CEO Interview with Howard Pakosh of TekStart
by Daniel Nenni on 09-26-2025 at 6:00 am

Howard Pakosh is a serial entrepreneur and angel investor. Mr. Pakosh is also Founder & CEO of the TekStart Group, a Toronto-based boutique incubator focusing on Fractional-C business development support, as well as developing, promoting, and licensing technology into markets such as blockchain, the Internet of Things (IoT), and Semiconductor Intellectual Property (SIP). TekStart is currently an early-stage investor in Piera Systems (CleanTech), Acrylic Robotics (Robotics), Low Power Futures (Semiconductors), ChipStart (Semiconductors), and Freedom Laser (Digital Health).

Mr. Pakosh has been involved in all phases of semiconductor development for over 30 years and was instrumental in the delivery of the first commercially available USB subsystem at his first IP startup, Xentec, and at Elliptic Technologies Inc. (both sold to Synopsys Inc.). Other ventures he has recently led include the development of Micron’s Hybrid Memory Cube controller and the most power-efficient crypto-processing ASICs for the SHA-256 and SCRYPT algorithms.

Tell us about your company.

When I started TekStart® in 1998, the mission was clear: give bold ideas the resources and leadership they need to become thriving businesses. The semiconductor field has always been high-stakes, demanding both creativity and flawless execution. Over time, TekStart has shifted from a commercialization partner to a true venture builder, now concentrating heavily on semiconductors and AI. Our purpose hasn’t changed. We exist to help innovators succeed. What has changed are our methods, which have adapted to an industry that’s become more global, more competitive, and far more complex.

What new features or technology are you working on?

I am excited to share a breakthrough we have achieved through our ChipStart® business unit. With Newport by ChipStart, we’ve proven we’re not only enabling innovation but driving it ourselves. Achieving up to 65 TOPS of performance at under 2 watts is a leap forward, unlocking a new level of performance-per-watt that opens doors to applications once thought impossible.

What problems are you solving?

The semiconductor industry faces three defining challenges: fragile supply chains, the demand for radical energy efficiency, and the relentless race to market. Newport by ChipStart is built to meet these challenges head-on. Instead of designs tied to exotic nodes, we enable resilient architectures that keep innovation moving, even in uncertain times. Instead of incremental power gains, we push for performance that redefines efficiency by delivering more capability per watt. Instead of waiting on the pace of new fabs, we help innovators leap from concept to production silicon faster than ever. Newport isn’t just solving today’s problems. It’s shaping the future of how chips get built.

What application areas are your strongest?

We see the greatest impact in Edge devices that demand real-time intelligence without relying on the cloud. Security and surveillance systems, for example, need to analyze video on-site to detect threats instantly, without the latency of sending data off-premise. In agriculture, sensors and vision systems powered by AI can monitor crops, optimize water use, and detect early signs of disease, helping farmers boost yields sustainably. AR/VR wearables require high-performance AI that runs efficiently in small, battery-constrained form factors, enabling immersive experiences without bulky hardware. And in industrial automation, factories are increasingly reliant upon AI-driven systems to inspect products, predict equipment failures, and streamline processes. These are just a few of the areas where Edge AI is not just useful but transformative, and where Newport by ChipStart is purpose-built to deliver.

What keeps your customers up at night?

The pace of innovation in semiconductors and AI has never been faster, and it’s only accelerating. Our customers worry about launching a product only to find it outdated months later. Staying relevant requires moving from concept to market at unprecedented speed – and doing so without compromising quality or performance. That’s where TekStart, through Newport by ChipStart, makes a real difference. We partner closely with innovators to compress development cycles and deliver silicon that keeps pace with today’s AI-driven world. By helping our partners beat obsolescence, we ensure they stay ahead in markets where timing is everything.

What does the competitive landscape look like and how do you differentiate?

Competition in our space revolves around two unforgiving dimensions: time-to-market and innovation. Both demand relentless execution to stay ahead. We differentiate by combining deep semiconductor expertise with an ecosystem of partners who bring complementary strengths in design, manufacturing, and deployment. Our team has decades of hands-on experience across ASIC design, operations, and AI applications. When combined with our extended network, we’re able to anticipate shifts in technology and deliver solutions that arrive ahead of the curve. This balance of speed and foresight is what keeps our customers competitive and what sets us apart in a crowded landscape.

How do customers normally engage with your company?

We typically engage through close collaboration across the semiconductor supply chain. That means working side-by-side with fab houses, manufacturers, and technology partners to ensure our products integrate seamlessly into their final deliverables. By embedding our solutions at the heart of their systems – whether it’s in smart cameras, connected devices, or industrial machinery – we help our partners to accelerate their own roadmaps. These collaborations go beyond transactions. They’re strategic partnerships designed to align our innovation with their market needs.

Also Read:

TekStart Group Joins Canada’s Semiconductor Council

CEO Interview with Barun Kar of Upscale AI

CEO Interview with Adam Khan of Diamond Quanta


SkyWater Technology Update 2025
by Daniel Nenni on 09-25-2025 at 10:00 am

SkyWater Technology, a U.S.-based pure-play semiconductor foundry, has made significant strides in 2025, reinforcing its position as a leader in domestic semiconductor manufacturing. Headquartered in Bloomington, Minnesota, SkyWater specializes in advanced innovation engineering and high-volume manufacturing of differentiated integrated circuits. The company’s Technology-as-a-Service model streamlines development and production, serving diverse markets including aerospace, defense, automotive, biomedical, industrial, and quantum computing.

A major milestone in 2025 was SkyWater’s acquisition of Infineon Technologies’ 200 mm semiconductor fab in Austin, Texas (Fab 25), completed on June 30. This acquisition added approximately 400,000 wafer starts per year, significantly boosting SkyWater’s capacity. Fab 25 enhances the company’s ability to produce foundational chips for embedded processors, memory, mixed-signal, RF, and power applications. By converting this facility into an open-access foundry, SkyWater strengthens U.S. semiconductor independence, aligning with national security and reshoring trends. The acquisition, funded through a $350 million senior secured revolving credit facility, also added about 1,000 employees to SkyWater’s workforce, bringing the total to approximately 1,700.

On July 29, SkyWater announced a license agreement with Infineon Technologies, granting access to a robust library of silicon-proven mixed-signal design IP. Originally developed by Cypress Semiconductor, this IP is validated for high-volume automotive-grade applications and is integrated into SkyWater’s S130 platform. The portfolio includes ADCs, DACs, power management, timing, and communications modules, enabling customers to design high-reliability mixed-signal systems-on-chip within a secure U.S. supply chain. This move positions SkyWater as a trusted partner for both commercial and defense markets, reducing design risk and accelerating time to market.

SkyWater’s financial performance in 2025 reflects steady progress. The company reported second-quarter results at the upper end of expectations, with a trailing 12-month revenue of $290 million as of June 30. However, its Advanced Technology Services segment faced near-term softening due to federal budget delays impacting Department of Defense funding. Despite this, SkyWater remains confident in achieving record ATS revenue in 2025 provided funding issues are resolved. The company’s stock price stands at around $10.56 with a market capitalization of $555 million and 48.2 million shares outstanding.

Strategically, SkyWater is capitalizing on emerging technologies. Its collaboration with PsiQuantum to develop silicon photonic chips for utility-scale quantum computing highlights its expertise in cutting-edge applications. Additionally, SkyWater adopted YES RapidCure systems for its M-Series fan-out wafer level packaging (FOWLP) in partnership with Deca Technologies, enhancing prototyping speed and reliability for advanced packaging. These initiatives align with SkyWater’s focus on high-margin, innovative solutions, positioning it as a strategic partner in quantum computing and photonics.

SkyWater’s commitment to U.S.-based manufacturing and its DMEA-accredited Category 1A Trusted Foundry status underscore its role in supporting critical domestic markets. The company’s facilities are certified for aerospace (AS9100), medical (ISO13485), automotive (IATF16949), and environmental (ISO14001) standards, ensuring high-quality production. Despite challenges like funding delays and integration risks from the Fab 25 acquisition, SkyWater’s focus on innovation, strategic partnerships, and capacity expansion positions it for long-term growth. Analysts view SkyWater as a strong investment, with a 24-36 month price target of $20, reflecting confidence in its de-risked business model and alignment with U.S. reshoring trends.

Also Read:

Podcast EP307: An Overview of SkyWater Technology and its Goals with Ross Miller

Rapidus, IBM, and the Billion-Dollar Silicon Sovereignty Bet

TSMC 2025 Update: Riding the AI Wave Amid Global Expansion

Intel’s Commitment to Corporate Responsibility: Driving Innovation and Sustainability


TSMC’s Push for Energy-Efficient AI: Innovations in Logic and Packaging
by Daniel Nenni on 09-25-2025 at 8:00 am

In his keynote at the TSMC OIP Ecosystem Forum, Dr. LC Lu, TSMC Senior Fellow and Vice President, Research & Development / Design & Technology Platform, highlighted the exponential rise in power demand driven by AI proliferation. AI is embedding itself everywhere, from hyperscale data centers to edge devices, fueling new applications in daily life.

Evolving models, including embodied AI, chain-of-thought reasoning, and agentic systems, demand larger datasets, more complex computations, and extended processing times. This surge has led to AI accelerators consuming 3x more power per package in five years, with deployments scaling 8x in three years, making energy efficiency paramount for sustainable AI growth.

TSMC’s strategy focuses on advanced logic and 3D packaging innovations, coupled with ecosystem collaborations, to tackle this challenge. Starting with logic scaling, TSMC’s roadmap is robust: N2 enters volume production in the second half of 2025, N2P is slated for next year, A16 with backside power delivery arrives by late 2026, and A14 is progressing smoothly.

Enhancements to N3 and N5 continue to add value. From N7 to A14, speed at iso-power rises 1.8x, while power efficiency improves 4.2x, with each node offering about 30% power reduction over its predecessor. A16’s backside power targets AI and HPC chips with dense networks, yielding 8-10% speed gains or 15-20% power savings versus N2P.
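A quick back-of-the-envelope check on those figures, assuming the roughly 30% per-node power reduction compounds across four major transitions (N7→N5→N3→N2→A14, treating A16 as an N2-family derivative — my assumption, since A16 is presented above as an N2P variant with backside power):

$$(1 - 0.30)^4 = 0.7^4 \approx 0.24 \quad\Rightarrow\quad 1/0.24 \approx 4.2\times \text{ better power efficiency}$$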

N2 Nanoflex DTCO optimizes designs for dual high-speed and low-power cells, achieving 15% speed boosts or 25-30% power reductions. Foundation IP innovations further enhance efficiency. Optimized transmission gate flip-flops cut power by 10% with minimal speed (2%) and area (6%) trade-offs, sometimes outperforming state gate variants.

Dual-rail SRAM with turbo/nominal modes delivers 10% higher efficiency and 150mV lower Vmin, with area penalties optimized away. Compute-In-Memory stands out: TSMC’s digital CIM based Deep Learning Accelerator offers 4.5x TOPS/W and 7.8x TOPS/mm² over traditional 4nm DLAs, scaling from 22nm to 3nm and beyond. TSMC invites partnerships for further CIM advancements.

AI-driven design tools amplify these gains. Synopsys’ DSO.AI is the leader, using reinforcement learning for PPA optimization and improving power efficiency by 5% in APR flows and 2% in metal stacks, totaling about 7%. For analog designs, integrations with TSMC APIs yield 20% efficiency boosts and denser layouts. AI assistants accelerate analysis 5-10x via natural language queries for power distribution insights.
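Note that the quoted 7% total checks out if one assumes the two savings compound multiplicatively rather than simply add:

$$1 - (1 - 0.05)(1 - 0.02) = 1 - 0.931 = 6.9\% \approx 7\%$$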

Shifting to 3D packaging, TSMC’s 3D Fabric includes SoIC for silicon stacking, InFO for mobile/HPC chiplets, CoWoS for logic-HBM integration, and SoW for wafer-scale AI systems. Energy-efficient communication sees 2.5D CoWoS improving 1.6x as microbump pitches shrink from 45µm to 25µm. 3D SoIC boosts efficiency 6.7x over 2.5D, though with smaller integration areas (1x reticle vs. 9.5x). Die-to-die IPs, aligned with the UCIe standard, are available from partners like AlphaWave and Synopsys.

HBM integration advances: HBM4 on TSMC’s N12 logic base die provides 1.5x bandwidth and efficiency over HBM3e DRAM dies. N3P custom bases reduce voltage from 1.1V to 0.75V. Silicon photonics via co-packaged optics offers 5-10x efficiency, 10-20x lower latency, and compact forms versus pluggables. AI optimizations from Synopsys/ANSYS enhance this by 1.2x through co-design.

Decoupling capacitance innovations using Ultra High-Performance Metal-Insulator-Metal plus Embedded Deep Trench Capacitor enables 1.5x power density without integrity loss, modeled by Synopsys/ANSYS tools. EDA-AI automates EDTC insertion (10x productivity) and substrate routing (100x, with optimal signal integrity).

Bottom line: Moore’s Law is alive and well. Logic scaling delivers 4.2x efficiency from N7 to A14, CIM adds 4.5x, and IP/design innovations contribute 7-20%. Packaging yields 6.7x from 2.5D to 3D, 5-10x from photonics, and 1.5-2x from HBM/decoupling-capacitor advances, with AI boosting productivity 10-100x.

TSMC honored partners with the 2025 OIP Awards for contributions in A14/A16 infrastructure, multi-die solutions, AI design, RF migration, IP, 3D Fabric, and cloud services. It is all about the ecosystem, absolutely.

Exponential AI power needs demand such innovations. TSMC’s collaborations drive 5-10x gains, fostering efficient, productive AI ecosystems. Looking ahead, deeper partnerships will unlock even more innovation for sustainable AI advancement.

Also Read:

MediaTek Develops Chip Utilizing TSMC’s 2nm Process, Achieving Milestones in Performance and Power Efficiency

TSMC’s 2024 Sustainability Report: Pioneering a Greener Semiconductor Future

TSMC 2025 Update: Riding the AI Wave Amid Global Expansion


Semiconductor Equipment Spending Healthy
by Bill Jewell on 09-24-2025 at 4:00 pm

Global spending on semiconductor manufacturing equipment totaled $33.07 billion in the 2nd quarter of 2025, according to SEMI and SEAJ. 2Q 2025 spending was up 23% from 2Q 2024. China had the largest spending at $11.36 billion, 34% of the total. However, China spending in 2Q 2025 was down 7% from 2Q 2024. Taiwan had the second largest amount and experienced the fastest growth, with 2Q 2025 spending of $8.77 billion, up 125% from 2Q 2024. TSMC was the major driver of the increase in Taiwan, with its capital expenditures (CapEx) up 62% in the first half of 2025 versus the first half of 2024. South Korea spending was the third largest at $5.91 billion, up 31% from a year earlier.
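As a quick sanity check on the arithmetic above (a minimal sketch; the 2Q 2024 regional figures are back-derived from the quoted growth rates rather than reported directly):

```python
# Consistency check of the SEMI/SEAJ 2Q 2025 figures quoted above.
# All values in billions of dollars.
total = 33.07
china = 11.36
taiwan, taiwan_yoy = 8.77, 1.25   # Taiwan 2Q 2025, up 125% year over year
korea, korea_yoy = 5.91, 0.31     # South Korea 2Q 2025, up 31% year over year

print(f"China share of total: {china / total:.0%}")                  # ~34%
print(f"Implied Taiwan 2Q 2024: ${taiwan / (1 + taiwan_yoy):.2f}B")  # ~$3.90B
print(f"Implied Korea 2Q 2024: ${korea / (1 + korea_yoy):.2f}B")     # ~$4.51B
```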

North America showed the fastest growth in semiconductor equipment spending in 2024, with 4Q 2024 spending of $4.98 billion, up 163% from $1.89 billion in 1Q 2024. However, North America spending in 1Q 2025 was $2.93 billion, down 41% from 4Q 2024. 2Q 2025 spending was down again at $2.76 billion. The spending drop can be attributed to delays in planned wafer fabs in the U.S. Intel has delayed completion of its wafer fab in New Albany, Ohio, until 2031 from its initial plan of 2025. Groundbreaking on Micron Technology’s wafer fab in Clay, New York, has been delayed until late 2025 from its original target of June 2024. Samsung reportedly delayed initial production at its new wafer fab in Taylor, Texas, to 2027 from an original goal of 2024.

Semiconductor equipment spending in Japan in 2Q 2025 was $2.68 billion, up 66% from 2Q 2024. Europe spending in 2Q 2025 was $0.72 billion, down 23% from a year earlier. Spending in the rest of the world (ROW) was $0.87 billion, down 28%.

The outlook for total semiconductor capital expenditures (CapEx) in 2025 remains essentially the same as our Semiconductor Intelligence estimates published in March 2025. We still project 2025 CapEx of $160 billion, up 3% from $155 billion in 2024. The outlook for 2026 CapEx is mixed. Intel expects CapEx to be lower in 2026 than its expected $18 billion in 2025. Micron Technology reported $13.8 billion in CapEx for its fiscal year ended in August 2025 and plans higher spending in fiscal year 2026. Texas Instruments projects 2026 CapEx of $2 billion to $5 billion, compared to $5 billion in 2025. The company with the largest CapEx, TSMC, projects a range of $38 billion to $42 billion for 2025. TSMC has not provided CapEx estimates for 2026, but investment bank Needham and Company predicts TSMC will increase CapEx to $45 billion in 2026 and $50 billion in 2027.

The U.S. CHIPS and Science Act was passed in 2022 to boost semiconductor manufacturing in the U.S. As reported by IEEE, most of the $30 billion proposed in the CHIPS Act was awarded in the two months after President Trump’s election in November 2024 and before his inauguration in January 2025. The Trump administration wants to revise the CHIPS Act but has not offered specific plans. In August, the U.S. government made an $8.9 billion investment in Intel for a 9.9% stake in the company. $5.7 billion of the investment came from grants approved but not yet awarded to Intel under the CHIPS Act. The remaining $3.2 billion in funding came from the Secure Enclave program which was awarded to Intel in September 2024. A contributor to Forbes questions the wisdom of the Intel investment.

U.S. Commerce Secretary Howard Lutnick is reportedly considering the U.S. government taking shares in other companies which have received money under the CHIPS Act. Thus, the Trump administration seems to be changing the terms of the CHIPS Act which was approved by Congress in 2022. Without any approval from Congress, the Trump administration is apparently taking back grant money and using it for equity investments.

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Bill Jewell
Semiconductor Intelligence, LLC
billjewell@sc-iq.com

Also Read:

Semiconductors Still Strong in 2025

U.S. Imports Shifting

Electronics Up, Smartphones down


Yuning Liang’s Painstaking Push to Make the RISC-V PC a Reality
by Jonah McLeod on 09-24-2025 at 10:00 am

At Embedded World 2025 in Nuremberg, Germany, on March 11, 2025, Yuning Liang, DeepComputing Founder and CEO, walked onto the stage with a mischievous smile and a challenge. “What’s the hardest product to make?” he asked rhetorically. “A laptop. It’s bloody hard… but we did it. You can swap the motherboard, you can upgrade, you can’t make excuses anymore. Use it, break it, fix it,” he exclaimed.

Looking back, it seems serendipity guided Liang’s path. He left China in the ’90s for England’s Midlands, where he studied electronics at university. A professor’s recommendation sent him to Singapore’s Nanyang Technological University on scholarship, launching his journey into computer engineering and AI research. He paused his PhD studies to join Xerox’s Asia-Pacific expansion. Five years there exposed him to proprietary systems and then to open platforms. At Nokia, he led feature phone platforms, watching the company’s fortunes collapse under Android’s rise. He then spent four years at Samsung driving open enterprise software. Huawei recruited him to optimize Android runtimes—an experience that inspired him to launch a security platform startup with several million in VC funding in 2018.

Liang never intended to be a hardware entrepreneur. “I’m a software guy. I was stupid enough to waste all my money on hardware,” he exclaimed during his Open-Source Summit Europe (OSS) presentation in Amsterdam on Wednesday, August 27, 2025. His résumé backs him up: Xerox, Nokia, Samsung — all in software, from Java virtual machines (JVMs) to mobile platforms. He honed his expertise putting JVMs onto PowerPC and ARM, he recalled at OSS Europe 2025. “I was in charge of the Java platform for Nokia on the feature phones. We had ARM7, ARM9 — not even Cortex at the time,” he intoned during his talk.

Then came the worldwide COVID lockdowns. During this time in China, RISC-V emerged as a strategic workaround for a nation seeking technological autonomy. This meant accelerating investment in homegrown RISC-V cores, toolchains, and ecosystems. The result was transformative. The architecture moved from a research curiosity to a national asset. The lockdown didn’t just expose vulnerabilities—it galvanized a shift toward open silicon, with RISC-V positioned as both a technical enabler and a geopolitical hedge. The IBM PC propelled U.S. GDP growth to double during the 1990s. Could open-source computing based on RISC-V do the same today? That is Liang’s gamble.

After selling his previous startup and stuck at home, Liang shifted his focus to RISC-V hardware. Out of boredom and frustration, DeepComputing was born. At first it was small projects: an RC car, a drone, some smart speakers — all running on RISC-V. “I had nothing better to do but electronics,” he admitted. But those toys were a training ground. They taught him the limits of early SoCs and showed him just how much work it would take to push RISC-V toward mainstream use. As the world turned inward, so too did Liang—redirecting his career toward the open hardware movement RISC-V now symbolized.

From the beginning, Liang leaned on pioneers like SiFive and Andes Technology. Their CPU cores — SiFive’s U74 (RV64GC with the Zba address-generation and Zbb basic bit-manipulation extensions) and Andes’s 7nm QiLai family — gave DeepComputing the building blocks for something more ambitious than toys. “None of our SoC manufacturers knew where to go,” he quipped at Embedded World 2025. “They didn’t know what nanometer, what compute power, how many TOPS, how much DDR.” His message to the audience: don’t wait for perfect specs — ship something.

Where others saw uncertainty, Liang saw opportunity. He wasn’t going to out-engineer Intel or ARM. But he could take existing IP and push it into places no one else dared — consumer laptops, where expectations were unforgiving and excuses ran out fast. Liang chuckled as he recalled the first RISC-V laptop, Roma: “I made 200 of them. Two hundred crazy guys paid me — $5,000 each. But I still lost money,” he declared at FOSDEM (Free and Open-Source Developers’ European Meeting) 2025 in Brussels — one of the world’s largest gatherings of open-source enthusiasts, developers, and technologists. Roma was no commercial success, but it was proof. People wanted to touch RISC-V, not just read about it in white papers. They wanted to hold it, break it, fix it — exactly what Liang had promised. And it gave him credibility: he was no longer just another RISC-V enthusiast waving slides. He had hardware in the wild, and that mattered.

The Twitter Pitch

The real breakthrough came not at a conference, but on Twitter. Liang reached out cold to Framework. Nirav Patel founded San Francisco-based Framework Computer Inc. in 2020 to redefine the laptop industry by building computers users can upgrade, fix, and customize—empowering ownership rather than planned obsolescence. Its flagship product, the Framework Laptop 13, earned acclaim for its open ecosystem and DIY-friendly design, while the newer Laptop 16 expanded into high-performance territory with swappable input modules and GPU upgrades. Framework isn’t just selling laptops—it’s selling a movement. It was exactly the partner Liang needed to distribute his RISC-V laptop.

“I pinged them on Twitter. I begged them — you do the shell; I’ll do the motherboard. Why not? We are open,” he exclaimed on the FOSDEM 2025 stage. It was audacious — a scrappy RISC-V builder pitching a darling of the repair-friendly laptop scene. But it worked. Framework agreed. DeepComputing would focus on motherboards; Framework would provide the shells, distribution, and community. At RISC-V Taipei Day 2025, Liang turned this into a rallying cry: “Don’t throw away the case. Throw away the motherboard. You can throw x86 away, you can throw ARM away, change it to RISC-V. How good is that? No more excuses.”

Liang described his method as a ‘Lego’ approach: modular, imperfect, iterative. “I don’t care how crap the hardware is,” he declared. “Make it into a product, open it up, give it to the open-source community. Twenty million developers will help you optimize it,” Liang exhorted at Embedded World 2025. By treating laptops like Lego kits — cases here, chiplets there, swappable boards everywhere — he created a system where failure wasn’t fatal. If a design fell short, you didn’t scrap the whole thing. You just swapped in another board.

AI the Killer App

Just as the Homebrew Computer Club gave early PC hobbyists a place to swap ideas in the 1970s, new online communities are coalescing around local AI. Reddit forums like r/LocalLLaMA and r/ollama, Discord servers for llama.cpp and Ollama, and Hugging Face discussion threads have become the meeting halls where enthusiasts trade benchmarks, quantization tricks, and new use cases. These are incubators of a culture that treats local AI inference the way early hobbyists treated microcomputers: as a frontier to be explored, shared, and expanded together.

Liang sees the same cultural energy, but knows that without upstreaming, RISC-V risks falling out of sync with every new kernel release. DeepComputing, he realized, couldn’t remain a boutique shop forever. “Once we hit 10,000 units, we break even,” he declared at OSS Europe 2025. That became his new target: scale beyond early adopters, reach students, reach universities. He launched a sponsorship program—free IP, free SoCs, free boards—for schools willing to teach RISC-V. “Help me move bricks,” he implored. “Otherwise, we’re all dead.”

Scaling the Software Cliff

By 2025, DeepComputing was rolling out boards with four, eight, even 32 cores—some targeting up to 50 TOPS of AI acceleration. Hardware was moving fast. But at OSS Europe 2025, Liang admitted the real bottleneck wasn’t silicon. “It’s not a hardware issue. It’s a software issue. Even Chromium doesn’t have a RISC-V build on their Continuous Integration (CI). Without upstream, who’s going to maintain it?” he asked.

Chromium became his case in point. Beneath its familiar interface lies a labyrinth of dependencies: hundreds of handwritten assembly libraries tuned for x86 and ARM, and a build system that challenges even seasoned developers. For most users, this complexity is invisible. But for anyone bringing up a new instruction set like RISC-V, Chromium is a gatekeeper. Without native support, a RISC-V laptop can’t run a modern browser—no tabs, no JavaScript, no YouTube, no GitHub. What looks like a coding detail becomes a usability cliff.

That’s why “Chromium out of the box” isn’t a luxury—it’s a litmus test. To move beyond dev boards into mainstream PCs, RISC-V must pass through the crucible of Chromium. And that means more than just compiling the browser: it means pulling in optimized libraries, build scripts, and platform assumptions. In short: no Chromium, no desktop.

Progress has been fragile but real. Greg Kroah-Hartman, the Linux maintainer, once sent Liang a video with a simple lesson: always upstream early, even from FPGA prototypes. Liang took it to heart. “Otherwise, you wait nine months, and by the time you reach market, your kernel is already out of date,” he said.

The effort to bring Chromium and CI to RISC-V is no longer theoretical—it’s underway, though unfinished. Community and vendor teams, including Alibaba’s Xuantie group, now have Chromium builds running on RISC-V hardware, with active work to optimize the V8 engine and UI responsiveness. Developers can already cross-compile, and repositories host functional ports. But upstream integration is still marked “In Progress” in Chromium’s tracker. That leaves RISC-V vulnerable to regressions, and performance still lags behind x86 and ARM—with slow rendering and video stutter.

This is where RISE, the industry-backed non-profit, comes in. Supported by Google, Intel, NVIDIA, Qualcomm, and others, RISE’s mandate is to make sure RISC-V isn’t treated as an afterthought in critical software stacks. By funding CI integration and pushing upstream support for projects like Chromium, Linux, and LLVM, RISE is trying to turn Liang’s fragile progress into something permanent. The vision is simple: once RISC-V is tested every day in the same CI loops as x86 and ARM, it stops being a science project and starts being “just another architecture” — ready for real desktops.

Through it all, Framework has been the constant—the partner that turned Liang’s persistence into something more than a COVID-era project. “We’ve been open, we’ve been slow, but we keep going. Hope lets us work harder,” Liang said. For the RISC-V movement that has spent fifteen years chasing its breakthrough, he offers something rare: momentum.

Also Read:

SiFive Launches Second-Generation Intelligence Family of RISC-V Cores

Beyond Von Neumann: Toward a Unified Deterministic Architecture

Beyond Traditional OOO: A Time-Based, Slice-Based Approach to High-Performance RISC-V CPUs

Basilisk at Hot Chips 2025 Presented Ominous Challenge to IP/EDA Status Quo


Arm Lumex Pushes Further into Standalone GenAI on Mobile
by Bernard Murphy on 09-24-2025 at 6:00 am

When I first heard about GenAI on mobile platforms – from Arm, Qualcomm and others – I confess I was skeptical. Surely there wouldn’t be enough capacity or performance to deliver more than a proof of concept? But Arm, and I’m sure others, have been working hard to demonstrate this is more than a party trick. It doesn’t hurt that foundation models have also been slimming down to a few billion parameters, so it now looks very practical to host meaningful chatbots and even agentic AI running standalone on a phone without need for cloud access. Arm has announced its new Lumex platform in support of this trend, which may turn me into a believer. What I find striking is that GenAI is hosted on the CPU cluster with no need for GPU or NPU support.

Why should we care?

The original theory on mobile and AI was that the mobile device would package up a request, ship it to the cloud, the cloud would do the AI heavy lifting and then ship the response back to the mobile device. That theory fell apart for a litany of reasons. Acceptable performance depends on reliable and robust wireless connections, not always certain especially when traveling. Shipping data back and forth introduces potential security risks and certainly privacy concerns. The inherent latency in connections with the cloud makes real-time interaction impractical, undermining many potentially appealing use cases like chatbot apps. Some mobile apps must support quick on-device learning to refine behavior to user preferences. Finally, neither mobile app developers nor their users want to add a cloud subscription on top of their app subscription.

There may still be cases where cloud-based AI will be a useful complement to mobile, but the general mood now leans to optimizing the on-device experience as much as possible.

Arm Lumex, a new generation platform for on-device AI

All good reasons to make AI native on the phone, but how can this be effective? Arm has gone all-in to make the experience real with their newly announced Lumex platform, emphasizing the CPU cluster as the centerpiece of AI acceleration. I’ll come back to that.

Briefly, Lumex introduces new CPU cores (branded C1-Ultra, C1-Premium and C1-Pro) and a GPU core (branded G1-Ultra), with the performance advances expected of a new release, together with a CSS philosophy of complete, 3nm-ready subsystems extending to chiplets, all supported by a software stack and ecosystem for fast time-to-market deployment.

It’s the CPU cores that particularly interest me. Arm is boasting these systems can run meaningful GenAI apps without needing to share the load with the Mali GPU or an NPU. They accomplish this with SME, their scalable matrix extension, now adding a new generation in SME2. This claim is backed up by endorsements from the Android development group, the AI partnerships group at Meta and the client engineering group at AliPay.

Benchmarking shows nearly 5X improvement in speech-recognition latency, nearly 5X higher encode rate for Gemma (the same model family as Google Gemini), and nearly 3X faster generation for Stable Audio (from the same people who brought you Stable Diffusion image generation).

Why not add further acceleration by folding in GPU and NPUs? Geraint North (Fellow, AI and Developer Platforms at Arm) made some interesting points here. GPU and NPU cores may be faster standalone at handling some aspects of a model, but only for data types and operations within their scope. CPUs on the other hand can handle anything. Another downside to a mixed engine solution is that moving data between engines (e.g. CPU/GPU) incurs overhead no matter how well you optimize, whereas a CPU cluster is already highly optimized for minimal latency.

The final nail in the mixed-engine coffin is in aligning with what millions of app developers want. They start their work on the CPU, naturally designing and optimizing to that target. Adding in considerations for GPU and NPU accelerator cores is pretty alien to how they think. For maximum business opportunity they also need to support a wide range of phones, some of which may have GPU/NPU cores, and some may not. An implementation based purely on the CPU cluster keeps their plans simple, since the CPUs can handle all data types and operations. Kleidi-based libraries simplify development further by making SME/SME2 acceleration transparent to the developer.

Maybe a highly targeted implementation for one platform could get higher AI performance using the GPU, but it wouldn’t be as scalable. Or as developer-friendly. Lumex offers a simpler development and deployment model: GenAI workloads on-device, across many phone types, without needing to go to the cloud. Very interesting.


Soitec’s “Engineering the Future” Event at Semicon West 2025
by Daniel Nenni on 09-23-2025 at 10:00 am

As part of the broader Semicon West ecosystem in Phoenix, Arizona, Soitec, a global leader in engineered substrates for semiconductors, is hosting an exclusive, invitation-only event titled Engineering the Future: Soitec Substrates Powering Technology Megatrends on Wednesday, October 8, 2025, from 2:30 PM to 6:00 PM MST.

Held at the Residence Inn by Marriott Phoenix Downtown (132 South Central Avenue, Phoenix, AZ 85004), this free, in-person gathering targets a select audience of about 50 semiconductor professionals, analysts, and investors. Spanning 3.5 hours, it blends presentations, expert panels, and networking to spotlight how Soitec’s innovative substrates are addressing critical industry challenges amid megatrends like 5G/6G connectivity, AI proliferation, and data center expansion.

The event underscores Soitec’s pivotal role in substrate engineering, which forms the foundational “canvas” for advanced chips. These materials enable higher performance, lower power consumption, and smaller form factors in next-gen devices. With the semiconductor sector facing supply chain strains, geopolitical tensions, and escalating demands for efficiency, Soitec positions its solutions—such as silicon-on-insulator (SOI) and strained silicon—as enablers for sustainable innovation. Expect deep dives into real-world applications, backed by data on market growth projections (e.g., RF markets exceeding $20B by 2030) and case studies from partners like Qualcomm and IBM.

Detailed Agenda
  • 2:30 PM – 3:00 PM: Welcome & Greetings — Kick off with introductory remarks, setting the stage for Soitec’s vision in a rapidly evolving landscape. This casual opener fosters early connections among attendees.
  • 3:00 PM – 3:45 PM: Soitec Executive Insights — A high-level presentation and panel featuring members of Soitec’s Executive Committee, led by CEO Pierre Barnabé. The session introduces key technology megatrends, including the shift toward heterogeneous integration and energy-efficient computing. Barnabé, a veteran in the field, will likely highlight Soitec’s R&D investments (over €300M annually) and recent milestones, such as advancements in 300mm wafer production for AI accelerators.
  • 3:45 PM – 5:00 PM: Industry Deep Dives & Panels — The core of the event: three focused sessions with market specialists, each 30-40 minutes followed by Q&A. Topics align with high-growth areas:
    1. RF Technologies for Smartphones: Exploring substrates for next-gen filters (e.g., BAW/TC-SAW) that boost 5G/6G signal integrity, reduce interference, and support mmWave bands. Panelists may discuss Qualcomm’s integration challenges and the $15B+ RF market.
    2. Optical Interconnects in Data Centers: Addressing photonics-enabled substrates for faster, low-latency links amid the AI-driven data explosion. Expect talks on silicon photonics reducing power by 50% versus copper, with insights from hyperscalers like Google.
    3. Technologies for Edge AI Devices: Focusing on substrates optimizing on-device inference for wearables, drones, and IoT — tying into themes like ultra-low power (sub-1V operation) and thermal management. This resonates with the Edge AI surge, projected to hit $100B by 2028.
    A culminating panel then synthesizes cross-topic synergies, debating supply chain resilience and U.S. CHIPS Act implications.
  • 5:00 PM – 6:00 PM: Networking Reception — Wind down with appetizers and drinks, providing ample time for one-on-one discussions. This informal segment is ideal for forging partnerships, with Soitec execs circulating to address investor queries.

Organized under Soitec’s banner (a company with more than 2,200 employees and roughly €0.9 billion in revenue in its most recent fiscal year), the event emphasizes actionable insights over hype. It’s not just a talk shop—attendees gain foresight into how substrates will underpin $1T+ in semiconductor value by 2030, per McKinsey estimates.

For registration, head to the Eventbrite Registration Page; spots are limited, so early RSVP is advised. Whether you’re tracking fab investments or scouting RF/AI plays, this is a prime opportunity to engage with substrate innovators shaping tomorrow’s tech stack.

About Soitec

Soitec (Euronext – Tech Leaders), a world leader in innovative semiconductor materials, has been developing cutting-edge products delivering both technological performance and energy efficiency for over 30 years. From its global headquarters in France, Soitec is expanding internationally with its unique solutions, and generated sales of 0.9 billion Euros in fiscal year 2024-2025. Soitec occupies a key position in the semiconductor value chain, serving three main strategic markets: Mobile Communications, Automotive and Industrial, and Edge and Cloud AI. The company relies on the talent and diversity of more than 2,200 employees, representing 50 different nationalities, working at its sites in Europe, the United States and Asia. Nearly 4,300 patents have been registered by Soitec.

Also Read:

How FD-SOI Powers the Future of AI in Automobiles

Powering the Future: How Engineered Substrates and Material Innovation Drive the Semiconductor Revolution

Soitec: Materializing Future Innovations in Semiconductors


The Impact of AI on Semiconductor Startups
by Kalar Rajendiran on 09-23-2025 at 6:00 am

At the AI Infra Summit 2025, one panel conversation captured the semiconductor industry’s anxieties and hopes. The session, titled “The Impact of AI on Semiconductor Startups,” examined how artificial intelligence is transforming not just what chips can do, but how we design them.

The backdrop is stark. Developing a leading-edge chip can take three to five years and cost over $100 million, even as the industry faces a projected shortage of one million skilled workers by 2030. Startups, without the vast data sets and large-scale engineering teams of well-established companies, face an especially steep climb. Could AI truly level the playing field?

Moderator Sally Ward-Foxton, senior reporter at EE Times, put that question to a well-represented panel: Laura Swan, General Partner at Silicon Catalyst Ventures; Arun Venkatachar, Vice President of AI & Central Engineering at Synopsys; and Stelios Diamantidis, Chief Product Officer of CogniChip—an investor, a market leader, and a promising startup, respectively. Over the next 30 minutes, they painted a vivid picture of how AI is accelerating chip development, lowering barriers to entry, and expanding who can participate in the next era of hardware innovation.

Startups, Speed, and the Need for Both Giants and Upstarts

Sally opened with a sobering statistic: U.S. venture capital once funded nearly 200 semiconductor startups each year, but by 2010 that number had fallen to single digits. “Even if you have a brilliant idea and a committed team, you’re looking at three to five years from concept to product,” said Stelios. “Meanwhile, an AI application can scale to millions of users overnight. Investors compare those timelines and often decide hardware is too slow and too risky for a return on their investment.”

Yet, as Laura emphasized, startups remain indispensable. “Innovative ideas, early funding, and sheer speed of execution are the lifeblood of progress,” she said. Laura explained that Silicon Catalyst—a hybrid incubator, accelerator, and venture fund—holds a unique position in nurturing these young companies. “As much as startups can be the bane of established players, the industry needs both,” she added. Healthy competition depends on the creative spark of startups and the scale, resources, and stability of established companies. One cannot thrive without the other.

AI Inside the Design Flow

Arun described how Synopsys began introducing machine learning into its design tools almost a decade ago. “We started replacing decades-old heuristics with AI,” he said. “Today those algorithms optimize power, performance, and area, accelerate verification, and even shorten manufacturing test cycles. In some flows we’ve cut design times by up to 40 percent.”

This is not a minor efficiency tweak. Stelios sees it as an inflection point akin to the arrival of logic synthesis in the 1980s. By connecting architecture, design, verification, and manufacturing into a continuous AI-assisted process, productivity gains can cascade across the entire chip-development cycle.

Cloud as the Great Equalizer

A recurring theme was how cloud-based design amplifies AI’s impact. Instead of buying racks of servers and expensive perpetual EDA licenses, a startup can now log in from a laptop and rent state-of-the-art tools on demand. Stelios and Arun were in agreement on this. “I know the moderator would love for us to disagree,” Stelios said with a grin, “but we’re on exactly the same page. Cloud-based design is essential if we want a healthier semiconductor ecosystem.”

By pushing sophisticated design environments to the cloud, companies can share resources, scale compute power instantly, and give even small teams access to capabilities once reserved for the largest players.

Human Ingenuity Still Matters

Despite all the talk of automation, no one on the panel predicted the death of engineering talent. “AI can remove drudgery and reduce errors,” said Laura, “but human creativity and architectural insight remain essential.”

Stelios invoked an evocative metaphor from his former employer Synopsys’ founder Aart de Geus, comparing great chip architects to the master builders of Europe’s cathedrals—people who understood the properties of every material and could see the entire structure from conception to completion. AI, he argued, will augment that holistic thinking rather than replace it.

Toward “Chips as a Service”

“What if building a chip were as easy as launching an app?” Sally asked the panel. If AI and cloud computing continue their rapid advance, the semiconductor world might soon resemble modern software development.

Laura offered a memorable quip: “We might eventually have something like a TSMC vending machine—not literally, of course, but a world where you feed in an idea, run it through automated flows, and pop out a prototype ready for market testing.”

The joke underscored a serious point. Faster, cheaper design cycles could entice investors back to hardware and open the door for entrepreneurs who today would never consider starting a chip company.

Summary

The AI Infra Summit panel delivered a clear message: artificial intelligence is reshaping semiconductor design from the ground up. AI-driven tools are compressing design and verification times, while cloud platforms are democratizing access to world-class design environments so that a small startup can compete with giants. At the same time, a healthy ecosystem depends on the coexistence of nimble startups and established companies—the former driving innovation and speed, the latter providing scale and resources. Human engineers remain central, guiding system-level decisions and bringing creative architecture to life.

Taken together, these forces could shrink chip-development timelines from years to mere months, making semiconductor ventures far more attractive to investors and far more accessible to entrepreneurs. Whether or not we ever see a “TSMC vending machine,” the vision is unmistakable: a future in which creating custom silicon is as agile, collaborative, and entrepreneurial as writing software—ushering in a true hardware renaissance.

Also Read:

CEO Interview with Andrew Skafel of Edgewater Wireless

Podcast EP278: Details of This Year’s Semiconductor Startups Contest with Silicon Catalyst’s Nick Kepler

Cutting Through the Fog: Hype versus Reality in Emerging Technologies


Yes Intel Should Go Private
by Daniel Nenni on 09-22-2025 at 10:00 am

Lip-Bu Tan started as Intel CEO on March 18th of this year and some very impressive changes have already taken place. Intel started the year with more than 100,000 employees and will finish it with around 75,000. Reporting structures have been flattened, and Intel’s culture is being transformed back into that of an innovation-driven semiconductor manufacturing company.

The most impressive transformation, however, has been on the financial side. Softbank and Nvidia have together invested $7B, and the US Government made an $8.9B equity investment for a roughly 10% stake. The biggest value here, in my opinion, is the trust placed in Lip-Bu Tan, absolutely.

What will Lip-Bu Tan do next?

Will there be more billion dollar investments? Yes, I think there could be. Will big customers do business with Intel Foundry? Yes, I think they will. In fact, I know they will, but I totally respect Lip-Bu’s promise to keep wafer agreement negotiations private until the ink is dry, so that is all I will say about that. And for those analysts who keep asking that question, I would suggest they do the same: respect Lip-Bu Tan.

The latest Intel question running through the media: Should Intel go fully private via a government-led buyout of public shares, potentially with private equity or consortium partners? This would remove Intel from public markets, freeing it from quarterly reporting pressures and allowing bold, long-term moves. Based on Intel’s history, current challenges, and ecosystem chatter, yes, I think Intel should go private. Below, I’ll try to break down the pros, cons, and a possible path forward. Help me out in the comments.

Pros and Cons of Privatization for Intel

Privatization isn’t a silver bullet, but it aligns with Intel’s need for more changes in order to stay on the leading edge of semiconductor manufacturing. Here’s a balanced comparison drawing from recent developments:

  • Strategic Flexibility — Pros: Frees management from short-term Wall Street demands, enabling focus on long-term R&D and a full breakup into specialized units (foundry, design, Mobileye, Altera, Intel Capital, etc.); experts argue this could create more value than the “conglomerate” model. Cons: Risk of bureaucratic inertia if government influence dominates; state-owned enterprises often prioritize politics over innovation, as seen in global examples like China’s SMIC.
  • Financial Stability — Pros: Access to patient capital (government/consortium) without dilution from public offerings; could fund $20B+ Ohio fabs without bankruptcy fears; privatization could unlock more value. Cons: A high buyout cost burdens taxpayers, and past bailouts have had mixed returns (solar/auto); recent stock surges suggest public markets still value the upside.
  • National Security & Competition — Pros: Bolsters U.S. chip independence amid China tensions; a private Intel foundry could serve the top fabless companies without conflicts, reducing reliance on Taiwan. Cons: Distorts markets by favoring Intel politically, harming competitors like AMD; possible foreign retaliation.
  • Talent & Operations — Pros: Long-term focus could stem talent drain; private status might attract top engineers with equity incentives tied to recovery. Cons: Government oversight risks eroding private-sector confidence.
  • Shareholder Value — Pros: A breakup could unlock billions in value; Softbank and Nvidia’s stakes signal private interest in AI collaboration. Cons: Public investors lose liquidity; if a breakup fails, possible value destruction.

Overall, the pros outweigh the cons in my view, provided privatization is temporary and mission-driven. Intel’s vertical integration, once a strength, now drags it down as design and manufacturing compete for resources. Public status amplifies scrutiny of past failures, but privatization could mimic Dell’s 2013 turnaround, in which Michael Dell and Silver Lake took the company private to refocus and later list again at a much higher value.

Why Now? The Tipping Point in 2025
  • Government’s Foot in the Door: The 10% stake (no board seats, but voting alignment) blurs public-private lines. Given the hostile political environment, Intel is at risk of becoming a political football when there is an administration change.
  • Market Signals: Softbank and Nvidia’s investment isn’t charity, it’s a strategic bet on AI collaboration and as I said, there could be more $5B investments.
  • Global Context: With TSMC’s limited US manufacturing and Samsung failing on the leading edge, the U.S. can’t afford Intel’s collapse. Privatization could create a “pure-play” U.S. foundry, echoing GE’s 2021-2024 breakup success with parts of GE now trading at premiums (GE Aerospace, GE HealthCare, and GE Vernova).

Bottom line: Intel’s survival demands escaping public-market and political quicksand. Privatization isn’t “handing over to China” (as some fear) but a U.S.-centric reset to reclaim leadership. Without it, Intel fades into the background; with it, Intel could power the next AI or quantum computing boom, absolutely.

Also Read:

AI Revives Chipmaking as Tech’s Core Engine

Advancing Semiconductor Design: Intel’s Foveros 2.5D Packaging Technology

Revolutionizing Processor Design: Intel’s Software Defined Super Cores


MediaTek Dimensity 9500 Unleashes Best-in-Class Performance, AI Experiences, and Power Efficiency for the Next Generation of Mobile Devices
by Daniel Nenni on 09-22-2025 at 6:00 am

In a bold move to dominate the premium mobile chipset market, MediaTek unveiled the Dimensity 9500 on September 22, 2025, from Shenzhen, China. This flagship SoC promises to elevate 5G smartphones with unparalleled performance, on-device AI capabilities, and energy efficiency, positioning MediaTek as the undisputed leader in gaming, compute, imaging, and artificial intelligence. As the world’s top supplier of smartphone SoCs, MediaTek’s latest innovation arrives at a pivotal moment when consumers demand devices that are not just powerful but intelligently adaptive to daily life.

Manufactured using TSMC’s N3P (enhanced performance) process, the Dimensity 9500 is built around a third-generation All Big Core CPU architecture, featuring a blazing-fast 4.21GHz ultra core, three premium cores, and four performance cores. Paired with four-lane UFS 4.1 storage, it delivers a staggering 32% uplift in single-core performance and 17% in multi-core tasks over its predecessor. Yet the real magic is in efficiency: the ultra core slashes power consumption by up to 55% at peak loads, ensuring longer battery life without compromising speed.

“As AI becomes part of everyday life, consumers want devices that feel smarter, faster, and more personal without sacrificing battery life,” said JC Hsu, corporate senior vice president at MediaTek and general manager of the Wireless Communications Business Unit. “The MediaTek Dimensity 9500 delivers exactly that: Breakthrough on-device AI, top-tier performance and efficiency, and a full suite of premium experiences that our partners can bring to users around the world.”

Enhancing this prowess is a revamped cache and memory system, including the industry’s first 4-channel UFS 4.1 support. This doubles read/write speeds and accelerates large AI model loading by 40%, while the second-generation Dimensity scheduler ensures seamless responsiveness under heavy multitasking. Gamers will rejoice at the integration of the Arm G1-Ultra GPU, boasting 33% higher peak performance and 42% better power efficiency. It introduces double frame-rate interpolation up to 120FPS and console-level ray tracing. Through collaborations with top studios, the chipset supports MegaLights in Unreal Engine 5.6 and Nanite in Unreal Engine 5.5, unlocking AAA real-time rendering and immersive lighting for mobile titles.

AI takes center stage with the ninth-generation MediaTek NPU 990, powered by Generative AI Engine 2.0. This doubles compute power and pioneers BitNet 1.58-bit large model processing, cutting energy use by 33%. The ultra-efficient NPU boasts over 56% less power draw at peak performance, facilitating 100% faster output from 3-billion-parameter LLMs, 128K token long-text processing, and the world’s first 4K ultra-high-definition image generation. The result? A truly “agentic” AI user experience—proactive, personalized, collaborative, evolving, and secure—that anticipates user needs in real time.
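For context on the “1.58-bit” label (background I am adding, not from the press release): BitNet-style models quantize each weight to one of three values, {-1, 0, +1}, and a three-way choice carries log2(3) ≈ 1.58 bits of information per weight, which is where the name comes from.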

The Dimensity 9500 is the first to support an integrated compute-in-memory architecture for its newly-added Super Efficient NPU, significantly reducing power consumption and enabling AI models to run continuously. This advancement further enhances end-user experiences with more sophisticated proactive AI.

Imaging enthusiasts aren’t left behind. The Imagiq 1190 ISP handles RAW-domain pre-processing, up to 200MP capture, 30fps continuous focus tracking, and a new portrait engine, while supporting cinematic 4K 60FPS portrait videos. It offers the latest MiraVision Adaptive Display technology, which dynamically adjusts contrast and color saturation based on ambient lighting, panel characteristics, and real-time content analysis. This ensures a clear viewing experience both outdoors in high-brightness scenarios — without overheating during prolonged use — and indoors in extremely dark environments, providing eye protection while maintaining clarity.

Connectivity shines too, with MiraVision for adaptive displays, Bluetooth calls, Wi-Fi fast transfer, and multi-network intelligence for uninterrupted 5G/Wi-Fi handoffs. AI-driven communication tech reduces 5G power by 10% and Wi-Fi by 20%, with 5CC carrier aggregation boosting bandwidth 15%. Plus, AI positioning and network selection yield 20% higher accuracy and 50% lower latency than rivals.

MediaTek’s Dimensity 9500 stems from years of R&D and ecosystem partnerships with game studios, OEMs, and software giants. Flagship devices powered by this chipset are slated for Q4 2025 launches, promising to flood the market with smarter, greener flagships. For more on MediaTek’s 5G lineup, visit i.mediatek.com/mediatek-5g.

Press release URL

Also Read:

MediaTek Develops Chip Utilizing TSMC’s 2nm Process, Achieving Milestones in Performance and Power Efficiency

TSMC’s 2024 Sustainability Report: Pioneering a Greener Semiconductor Future

TSMC 2025 Update: Riding the AI Wave Amid Global Expansion