Corporate

Roche expands AI computing with Nvidia chips

Swiss pharmaceutical company Roche has significantly expanded its artificial intelligence (AI) infrastructure by purchasing thousands of advanced chips from Nvidia, aiming to accelerate drug discovery and improve the efficiency of its research and development operations.

The company has installed more than 2,000 high-performance graphics processing units (GPUs) across its research centres in the United States and Europe. These chips provide the computing power required to process vast amounts of biomedical data and run complex simulations used in modern drug development.

By expanding its AI computing capacity, Roche plans to speed up several stages of the pharmaceutical research process. Scientists will be able to analyse large clinical and biological datasets faster, design potential drug molecules more efficiently and simulate how treatments may work in the human body before they enter clinical trials.

The investment is part of Roche’s ongoing collaboration with Nvidia to integrate advanced AI tools into pharmaceutical research. The enhanced computing platform will support the development of AI models capable of identifying promising drug targets, predicting outcomes in clinical trials and improving diagnostics.

According to Roche executives, faster computing power is becoming essential in the pharmaceutical industry as companies attempt to shorten the long timelines associated with drug development. Developing a new medicine can often take more than a decade and cost billions of dollars, making technologies that increase research productivity highly valuable.

With the latest deployment, Roche has built one of the largest AI-focused computing infrastructures in the pharmaceutical sector. The company expects the expanded system to help researchers run complex analyses in hours instead of days, allowing teams to test more hypotheses and accelerate scientific discovery.


Leaders

Jensen Huang projects $1 trillion AI revenue by 2027

Nvidia CEO Jensen Huang has projected that the artificial intelligence (AI) computing market could generate up to $1 trillion in revenue by 2027, reflecting the rapid expansion of AI infrastructure worldwide.

Speaking at Nvidia GTC, the company’s annual developer conference in San Jose, Huang said demand for AI chips and data-centre systems is rising much faster than previously expected. The estimate is significantly higher than earlier projections and exceeds even the most optimistic analyst forecasts.

Just last year, Nvidia suggested that the market opportunity for AI data-centre hardware could reach about $500 billion. Huang said accelerating investment by major technology companies and cloud providers has since pushed the potential market size much higher.

Large technology firms are rapidly building AI infrastructure to train and deploy increasingly powerful AI models. This has led to strong demand for Nvidia’s advanced processors and integrated computing systems used in data centres around the world.

During his keynote address, Huang also introduced new AI platforms designed to support the next generation of computing workloads. These include systems based on Nvidia’s Blackwell architecture and future platforms such as Vera Rubin, aimed at powering large-scale AI data centres.

According to Huang, the AI industry is now entering a new phase focused on AI inference, the stage where trained models are deployed to perform real-time tasks. This includes applications such as digital assistants, automated software systems, robotics, and autonomous machines.

The shift toward inference computing is expected to significantly increase the demand for AI hardware and specialised data-centre infrastructure. Nvidia believes this trend will drive the next wave of growth for the semiconductor industry.


Technology

Nvidia plans AI laptop chips launch in 2026

Nvidia is preparing to launch a new range of artificial-intelligence-focused laptop chips in the first half of 2026, marking a major expansion beyond its traditional graphics processor business.

The upcoming processors are expected to be built on Arm architecture and will combine CPU and GPU functions into a single chip. This integrated design aims to deliver high performance while using less power, making it suitable for thin and lightweight laptops.

The new platform is being developed to run advanced AI features directly on the device. This means tasks such as real-time translation, content creation, smart assistants and image processing can work faster without depending heavily on cloud computing. Running AI locally also improves data privacy and reduces latency.

With this move, Nvidia will enter the laptop CPU market and compete more directly with long-time PC chip leaders Intel and AMD. The launch is expected to be part of a broader industry shift toward so-called AI PCs, computers designed to handle artificial intelligence workloads on the device itself.

The chips are also likely to benefit from Nvidia’s strong AI software ecosystem, which is widely used by developers and enterprises. This could make it easier for laptop manufacturers to introduce AI features in their products.

For the PC industry, the entry of Nvidia into the CPU space could reshape competition by adding a powerful new player with deep expertise in AI computing. For Nvidia, it represents a strategic step toward becoming a full-platform computing company rather than just a GPU supplier.

While the company has not announced an exact launch date, industry reports suggest that laptops powered by these processors could begin appearing in the market sometime in 2026.


Technology

NVIDIA brings GeForce NOW to Amazon Fire TV

NVIDIA has launched GeForce NOW on the Amazon Fire TV Stick, letting users play PC games on their TVs without a console or gaming PC. The service streams games from NVIDIA’s cloud servers, so all processing happens remotely and the gameplay video is delivered over the internet.

Supported Fire TV devices include Fire TV Stick 4K Plus (2nd Gen) and Fire TV Stick 4K Max (1st & 2nd Gen). Players need a compatible controller and a stable internet connection to enjoy smooth gameplay. The games stream at 1080p resolution and 60 frames per second, giving TV viewers clear and responsive performance.

Gamers can access titles from popular PC stores like Steam, Epic Games Store, and Battle.net. NVIDIA also added eight new games to the service, including Kingdom Come: Deliverance, Mega Man 11, and the Street Fighter 30th Anniversary Collection.

This launch brings cloud gaming into living rooms, letting people enjoy console-like experiences without extra hardware. Currently, Fire TV streaming is limited to 1080p and does not support HDR or 4K, but it offers an entry-level way to experience PC gaming on a TV.

By expanding to Fire TV, NVIDIA competes more directly with other cloud gaming platforms such as Amazon Luna and Xbox Cloud Gaming, which also focus on TV streaming. The service is rolling out globally but is not yet available in India; regional expansion is planned.


Leaders

India’s data centres seen as job engine, says Nvidia CEO

India is emerging as a global hotspot for artificial intelligence (AI) and digital infrastructure, drawing recognition and investment from international leaders. United Nations Secretary-General António Guterres praised India for its leadership in AI, noting the country’s role in shaping global AI discussions and promoting inclusive governance.

Nvidia CEO Jensen Huang predicted that India’s AI and data-centre expansion could create a surge in employment, reminiscent of the internet boom. According to Huang, constructing data centres will generate jobs for electricians, plumbers, architects, and project managers. Beyond that, a wide range of indirect roles in operations, supply chains, and startups could also emerge.

The Indian government is supporting this growth with strategic incentives. The 2026 Budget extended tax holidays through 2047 for foreign companies setting up data centres in India, aiming to attract massive investment in cloud computing and AI services. Industry experts estimate that this could bring in billions of dollars, further strengthening India’s position in the global tech ecosystem.

India is also hosting the India AI Impact Summit 2026 in New Delhi, bringing together global leaders, ministers, technology executives, and academics. The summit will focus on AI governance, capacity building, and international collaboration, including platforms like the Global Digital Compact.


1-Minute Read

Nvidia backs CoreWeave with $2bn investment

Nvidia is investing $2 billion in CoreWeave, buying around 23 million shares and becoming its second-largest shareholder. The funding comes with an expanded partnership to accelerate AI-focused data centre development.

CoreWeave, once a crypto miner, now provides high-performance computing to tech firms using Nvidia chips. The investment will help CoreWeave secure land, power, and infrastructure to build over 5 gigawatts of AI compute capacity by 2030, meeting growing demand for AI services.

The announcement lifted CoreWeave shares, while Nvidia’s stock reaction was mixed.

Corporate

Nvidia’s H200 chip blocked in China

Nvidia, led by CEO Jensen Huang, has run into an unexpected hurdle in China, as customs authorities have blocked shipments of the company’s advanced H200 artificial intelligence (AI) chip. The sudden move has forced suppliers to pause production, creating uncertainty for Chinese tech companies eager to use the processor.

The H200 chip, one of Nvidia’s most powerful AI products, had been cleared for export by the US government, and Huang’s team was preparing to start shipments as early as March. Nvidia had also ramped up component production to meet strong demand from Chinese clients.

However, Chinese customs recently informed logistics agents that the H200 would not be allowed into the country. Officials did not give a reason, leaving the company and its partners uncertain whether the block is temporary or part of a broader policy.

Many components made for the H200 are highly specialised and cannot easily be repurposed, prompting suppliers to halt production to avoid building unsellable inventory. Chinese authorities have also reportedly advised local tech firms to avoid buying the chips unless essential, further dampening demand.

Neither Nvidia nor Chinese officials have publicly commented on the blockage. For Huang and his team, the future of H200 shipments to China remains uncertain as the global AI chip trade faces growing complexity.


Technology

Nvidia introduces Rubin chip architecture

Nvidia has revealed its Rubin AI platform, a new chip architecture aimed at supporting the next generation of artificial intelligence systems. The announcement signals Nvidia’s continued push to stay ahead of rising AI computing demands, particularly as models become more complex and reasoning-driven.

Rubin is designed to succeed the Blackwell architecture and offers substantial gains in AI inference and training performance. Nvidia says the platform is optimised for workloads that require long-context understanding, faster response times, and more efficient processing, making it suitable for large-scale AI applications across industries.

The company said Rubin is already in production and will be deployed more widely in the second half of 2026. With strong interest from major cloud and technology firms, the Rubin platform is expected to become a key building block for future AI infrastructure.

Unlike conventional chip launches, Rubin is built as a complete computing platform. It combines GPUs, CPUs, memory, networking, and data processing technologies into a tightly integrated system. This approach reduces latency and improves data movement, which is critical for handling large and distributed AI workloads in modern data centres.

Energy efficiency and cost reduction are central to the Rubin design. Nvidia claims the new architecture can significantly lower the cost of running AI models compared with previous platforms, while also cutting power consumption. This could help cloud providers and enterprises scale AI operations without proportionate increases in infrastructure costs.

Rubin is also aligned with the industry’s shift toward reasoning-based AI, where systems are expected to analyse information, maintain long contexts, and make more complex decisions. Nvidia believes this capability will define the next phase of AI development, moving beyond simple pattern recognition.


Corporate

Nvidia launches Alpamayo AI to boost self-driving cars

US chipmaker Nvidia has unveiled Alpamayo, a new artificial intelligence system designed to improve the safety and performance of autonomous vehicles. The announcement was made at the Consumer Electronics Show (CES) 2026 in Las Vegas, where the company showcased its latest advances in automotive AI.

Alpamayo is a reasoning-based AI model that allows self-driving cars to better understand their surroundings and decide how to respond to real-world situations. Unlike traditional systems that mainly detect objects, Alpamayo is built to analyse complex traffic scenarios, predict risks and choose safer driving actions.

At the core of the system is Alpamayo-1, a large Vision-Language-Action model with 10 billion parameters. It processes data from vehicle cameras and sensors, interprets what it sees and determines the most appropriate response, such as slowing down, stopping or changing lanes. Nvidia says the model can also explain its reasoning, which is expected to help developers improve safety and transparency.
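Nvidia has not published a public API for Alpamayo-1 here, but the perceive-interpret-act loop described above can be illustrated with a purely hypothetical sketch. All names, fields, and thresholds below are invented for illustration and do not represent Nvidia's actual model interface:

```python
from dataclasses import dataclass

@dataclass
class Scene:
    """Toy stand-in for fused camera/sensor input (hypothetical)."""
    obstacle_ahead: bool
    obstacle_distance_m: float  # distance to nearest obstacle
    lane_clear_left: bool       # is the adjacent lane free?

def choose_action(scene: Scene) -> tuple[str, str]:
    """Pick a driving action and return it alongside a human-readable
    rationale, mirroring the model's ability to explain its reasoning."""
    if not scene.obstacle_ahead:
        return "continue", "no obstacle detected in the planned path"
    if scene.obstacle_distance_m < 10:
        return "stop", "obstacle closer than emergency braking threshold"
    if scene.lane_clear_left:
        return "change_lane", "obstacle ahead but adjacent lane is clear"
    return "slow_down", "obstacle ahead and no clear lane to pass"

action, why = choose_action(Scene(True, 25.0, False))
print(action, "-", why)  # → slow_down - obstacle ahead and no clear lane to pass
```

The point of the sketch is the pairing of each action with its rationale: exposing the "why" alongside the "what" is the transparency property the article attributes to the model.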

Nvidia CEO Jensen Huang described Alpamayo as a breakthrough for what he called “physical AI”, comparing it to how conversational AI transformed digital applications. He said the company’s goal is to make autonomous driving systems more reliable, especially in rare and unpredictable road situations, often referred to as edge cases.

The Alpamayo platform also includes AlpaSim, a simulation environment that allows developers to test self-driving software in virtual settings before deploying it on real roads. Nvidia has released the tools and models as open source, encouraging global researchers and automakers to use and improve them.

The company plans to begin deploying Alpamayo-powered systems in vehicles later this year, starting with select Mercedes-Benz models in the United States.

Reacting to the announcement, Tesla CEO Elon Musk commented that achieving most of autonomous driving is relatively easy, but solving the final, rare scenarios remains the biggest challenge. His remarks highlight the growing competition and debate among technology leaders racing to perfect self-driving technology.


Corporate

Nvidia acquires SchedMD, launches open-source Nemotron 3 AI

Nvidia is making a strong push into open-source artificial intelligence by acquiring SchedMD, the company behind the widely used Slurm workload manager, and unveiling a new family of AI models called Nemotron 3. These moves aim to expand access to AI tools for developers, researchers, and enterprises.

SchedMD develops Slurm, an open-source system that manages computing tasks across clusters and supercomputers. Nvidia’s acquisition will integrate Slurm into its AI and high-performance computing systems, but the company has assured users that Slurm will remain open-source and hardware-neutral. This ensures that research institutions and businesses can continue using and customizing the software freely.
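In practice, users interact with Slurm through batch scripts that declare the resources a job needs; the scheduler then places the job on a suitable node in the cluster. A minimal sketch is shown below (the partition name, resource sizes, and `train.py` script are placeholders, not part of any specific deployment):

```shell
#!/bin/bash
#SBATCH --job-name=train-demo    # name shown in the queue
#SBATCH --partition=gpu          # placeholder partition name
#SBATCH --gres=gpu:1             # request one GPU
#SBATCH --cpus-per-task=8        # CPU cores for the task
#SBATCH --mem=32G                # memory for the job
#SBATCH --time=02:00:00          # wall-clock limit (hh:mm:ss)

# srun launches the task on the node(s) Slurm allocated
srun python train.py
```

A script like this is submitted with `sbatch job.sh` and monitored with `squeue`; it is this scheduling layer, rather than the AI models themselves, that the SchedMD acquisition brings into Nvidia's stack.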

Alongside this, Nvidia introduced Nemotron 3, which includes three models: Nano, Super, and Ultra. Nano is designed for efficient execution of smaller tasks, Super supports applications with multiple AI agents, and Ultra handles complex workloads requiring high performance. Nvidia has also released datasets, frameworks, and tools to help developers train, test, and adapt these models.

A key feature of Nemotron 3 is transparency. Nvidia is providing not just the model weights but also training data and the development framework. This openness allows developers to customize the models for different applications and contribute to their improvement.

The Nemotron 3 models are designed to deliver higher accuracy and faster performance while being flexible enough for deployment on cloud platforms and integration with popular open-source environments.

By combining Slurm’s infrastructure with open-source AI models, Nvidia is strengthening its role in the AI ecosystem. The company aims to foster collaboration, innovation, and accessibility, supporting developers and enterprises in building AI applications more efficiently and transparently.
