IC Taiwan Grand Challenge Batch 2 Winner Introduction – TurboNext.ai

2025-05-07 By: InnoVEX Team

TurboNext.ai scales large language model (LLM) infrastructure with heterogeneous compute and cost-efficient memory. Its software enables efficient execution across diverse GPUs, while its silicon pairs an LLM accelerator with terabytes of local memory for faster AI processing. The Singapore-based company is transforming the economics of generative AI by optimizing LLM workloads with model-specific resource allocation and workload-defined hardware.

Better Efficiency Is Needed for LLMs to Scale

Large Language Model (LLM) applications face unique operational challenges: demand can spike periodically, destabilizing operations and degrading the performance-to-expense ratio. According to Grand View Research, the global LLM market was estimated at USD 5.617 billion in 2024 and is expected to grow at a CAGR of 36.9% from 2025 to 2030, reaching USD 35.434 billion by 2030. Such rapid growth will further strain today's operational hardware and may exacerbate these existing issues.

Introducing the IC Taiwan Grand Challenge

To strengthen Taiwan's position as a global semiconductor powerhouse, the National Science and Technology Council (NSTC) organizes the IC Taiwan Grand Challenge (ICTGC). The competition focuses on teams leveraging innovative IC design to develop advanced application solutions or enhanced technologies, with particular attention to the fields of AI Core Technologies and Chips, Smart Mobility, Smart Manufacturing, Smart Medtech, and Sustainability. TurboNext.ai was selected as a winner in the AI Core Technologies and Chips category for its LLM Heterogeneous Compute Platform.

Heterogeneous Computing Improves LLM Performance

The computational demands of LLMs and other AI workloads are escalating rapidly, driving up both total operational cost and the hardware required to keep models running reliably. This forces users to rethink their LLM strategy to optimize operating costs and prevent unnecessary downtime. TurboNext.ai's Heterogeneous Compute Platform addresses this by executing LLM workloads efficiently across diverse GPUs, backed by silicon that pairs an LLM accelerator with terabytes of local memory for faster AI processing.

Conventional AI inference hardware centers on GPUs, which can cost as much as USD 300,000 per rack; a model with high user traffic may require more than one rack, an expense that is prohibitive for startups in particular. AI-focused enterprises also tend to size their LLM infrastructure for peak workloads, purchasing resources they rarely need and leaving themselves unable to adapt when user traffic changes. TurboNext.ai's heterogeneous compute system instead lets users optimize their existing hardware through workload-specific allocation and performance adjustments.
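To make the idea of workload-specific allocation concrete, here is a minimal sketch of how a scheduler might split LLM serving demand across a mixed GPU fleet. TurboNext.ai has not published its scheduling algorithm; the device names, throughput figures, and costs below are illustrative assumptions, and the greedy cheapest-per-token placement is one simple strategy among many.

```python
from dataclasses import dataclass

@dataclass
class Device:
    """One node in a heterogeneous serving fleet (illustrative numbers)."""
    name: str
    tokens_per_sec: float      # sustained decode throughput at full load
    cost_per_hour: float       # amortized hardware + power cost (USD)
    utilization: float = 0.0   # fraction of capacity already assigned

    def cost_per_token(self) -> float:
        return self.cost_per_hour / (self.tokens_per_sec * 3600)

def allocate(demand_tps: float, fleet: list[Device]) -> list[tuple[str, float]]:
    """Greedily place serving demand (tokens/sec) on the devices with the
    lowest cost per token that still have spare capacity."""
    plan, remaining = [], demand_tps
    for dev in sorted(fleet, key=Device.cost_per_token):
        headroom = dev.tokens_per_sec * (1.0 - dev.utilization)
        if headroom <= 0.0:
            continue
        share = min(remaining, headroom)
        dev.utilization += share / dev.tokens_per_sec
        plan.append((dev.name, share))
        remaining -= share
        if remaining <= 0.0:
            break
    if remaining > 0.0:
        raise RuntimeError(f"{remaining:.0f} tok/s of demand left unplaced")
    return plan

if __name__ == "__main__":
    # Hypothetical fleet; real throughput depends on model, batch size, etc.
    fleet = [
        Device("H100-rack", tokens_per_sec=40_000, cost_per_hour=98.0),
        Device("A100-rack", tokens_per_sec=15_000, cost_per_hour=33.0),
        Device("L40S-rack", tokens_per_sec=6_000,  cost_per_hour=10.0),
    ]
    for name, share in allocate(25_000, fleet):
        print(f"{name}: {share:,.0f} tok/s")
```

A production scheduler would also weigh latency targets, memory capacity, and model placement, but the cost-per-token ordering captures the core idea of matching each workload to the cheapest hardware that can serve it.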

TurboNext.ai has also developed an NPU chip offering high throughput for transformer-based LLMs, scalability through tensor and intra-model parallelism, large DRAM-backed capacity for LLM memory requirements, and a balance of compute and memory bandwidth tuned for LLM inference. The platform combines software, NPU, and system design, using existing GPUs as co-processors to deliver LLM performance at scale and what the company describes as the world's most efficient token generation rate.
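Why does balancing compute against memory bandwidth matter so much for inference? A back-of-the-envelope roofline estimate shows it (the figures below are illustrative assumptions, not TurboNext.ai specifications): in low-batch decoding, every generated token requires streaming all model weights from memory once, so memory bandwidth, not raw compute, typically sets the throughput ceiling.

```python
def decode_tokens_per_sec(params_billion: float,
                          bytes_per_param: float,
                          mem_bw_gb_per_s: float) -> float:
    """Rough upper bound on batch-1 decode throughput: each token must
    stream every weight from memory once, so throughput is capped at
    memory bandwidth divided by model size in bytes."""
    model_bytes = params_billion * 1e9 * bytes_per_param
    return mem_bw_gb_per_s * 1e9 / model_bytes

# A 70B-parameter model quantized to 1 byte/param on ~3,350 GB/s of HBM:
print(f"{decode_tokens_per_sec(70, 1.0, 3350):.0f} tok/s")   # ~48 tok/s
# The same model on 400 GB/s of commodity DRAM:
print(f"{decode_tokens_per_sec(70, 1.0, 400):.0f} tok/s")    # ~6 tok/s
```

Serving at higher batch sizes amortizes the weight reads across many tokens, which is where a design that balances compute throughput against memory bandwidth, as described above, pays off.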

Sign Up for the Third Batch of the IC Taiwan Grand Challenge

Registration for the third batch of the IC Taiwan Grand Challenge opened on March 26 and closes on June 30. Startups planning to collaborate with Taiwan's semiconductor industry can register now.

Sign up here: https://ictaiwanchallenge.org/