Google Cloud launches two new AI chips to compete with Nvidia

Custom AI chip architecture: purpose-built tensor processors optimised for machine learning workloads
Integration with Google Cloud Platform: direct access through standard cloud infrastructure, with no additional setup
Cost reduction: lower per-unit pricing compared to previous TPU generations
Performance improvements: faster processing for model training and inference tasks
Flexible capacity: rent compute resources on demand rather than purchasing hardware
Multi-chip support: Google Cloud continues to offer Nvidia GPUs alongside TPUs, giving customers choice and flexibility
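As a sketch of the on-demand model described above, TPU capacity can be provisioned and released through the standard gcloud CLI rather than through a hardware purchase. The VM name, zone, accelerator type, and runtime version below are illustrative placeholders, not details from the announcement; availability varies by project and region:

```shell
# Provision a TPU VM on demand (name, zone, accelerator type, and
# runtime version are illustrative; check availability in your project).
gcloud compute tpus tpu-vm create my-tpu \
  --zone=us-central2-b \
  --accelerator-type=v4-8 \
  --version=tpu-vm-base

# Connect to the VM to run training or inference workloads.
gcloud compute tpus tpu-vm ssh my-tpu --zone=us-central2-b

# Delete the VM when finished to stop incurring charges.
gcloud compute tpus tpu-vm delete my-tpu --zone=us-central2-b
```

Because billing stops when the VM is deleted, this pay-as-you-go loop is what "flexible capacity" amounts to in practice.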
Use cases:
Training large language models and other deep learning models at scale
Running inference for AI applications with high-throughput requirements
Organisations seeking to reduce GPU costs on existing Google Cloud deployments
Companies building custom AI models that need dedicated compute resources
Research institutions requiring significant computational capacity for AI experiments