Everyone deserves a better computer, AheadComputing creates compelling RISC-V core IP

hist78

Co-Founder, CEO, & President
Dr. Debbie Marr
Debbie Marr was an Intel Fellow and Chief Architect of the Advanced Architecture Development Group (AADG) at Intel, where she led development of a CPU core to bring leadership performance and perf/watt to Intel’s future platforms.

Prior to her current role, Debbie’s 33 years at Intel impacted both product and research. Debbie played leading roles on Intel CPU products from the 386SL to Intel’s current leading-edge products. Debbie was the server architect of Intel® Pentium™ Pro, Intel’s first Xeon Processor. She brought Intel Hyperthreading Technology from concept to product on the Pentium 4 Processor. She was the chief architect of the 4th Generation Intel Core™ (Haswell) and led advanced development for Intel’s 2017/2018 Core/Xeon CPUs. Debbie also spent 7 years in Intel Labs as the Director of Accelerator Architecture Lab where she led research in machine learning and acceleration techniques for CPU, GPU, FPGA, and AI Accelerators. Debbie has authored over 40 patents in many aspects of CPU, AI accelerators, and FPGA architecture/microarchitecture.

Debbie has a PhD in electrical and computer engineering from the University of Michigan, an MS in electrical engineering and computer science from Cornell University, and a BS in electrical engineering and computer science from the University of California, Berkeley.

******

Co-Founder
Jonathan Pearce
Jonathan Pearce was an Intel Principal Engineer, CPU Architect and a key technologist & strategist in the Advanced Architecture Development Group, encompassing power, performance, area, instruction set architecture, and microarchitecture concepts.

Jonathan contributed to both products and research during his 22 years at Intel. In product groups at Intel, Jonathan has worked in both pre-silicon and post-silicon roles on multiple generations of Intel Core™ SOCs. As an Intel research scientist, he was the liaison to the Intel Collaborative Research Institute for Computational Intelligence. He also led a proof-of-concept research project of a novel microprocessor architecture with breakthrough performance for AI/ML/HPC algorithms. Jonathan has authored over 19 patents in CPU, AI, and GPU architecture/microarchitecture.

Jonathan has an MS and a BS in electrical and computer engineering from Carnegie Mellon University.

*****

Co-Founder
Dr. Srikanth Srinivasan
Dr. Srikanth Srinivasan (Sri) is an industry expert on microprocessor architecture and microarchitecture with over 20 years of technical leadership experience in product R&D. At Intel, he has successfully taped out several high performance chips (Nehalem, Haswell, Broadwell) used in client & server markets, as well as low-power chips (Bergenfield) used in phones & tablets. Most recently, Sri led the frontend and backend CPU teams at the Advanced Architecture Development Group in defining a novel microarchitecture that pushed the frontiers of processor performance. He has also worked on accelerators and computing-in-memory for AI. Sri is also a prolific author with more than a dozen highly cited papers and over 50 patents. His papers were featured in IEEE Micro Top Picks in 2003, 2004 and 2006. Sri has a PhD in Computer Science from Duke University and a BE (Honors) in Computer Science from BITS Pilani.

*****

Co-Founder
Mark Dechene
Mark Dechene was an Intel Principal Engineer and CPU Architect in the Advanced Architecture Development Group, where he led the Memory Execution Architecture team within the CPU core. In his 16 years at Intel, Mark has worked on architecture development for Intel CPU products including Haswell, Broadwell, Goldmont, Goldmont Plus, Tremont, and Skymont. Throughout his career, Mark has focused on driving core product architecture teams to deliver leadership CPU performance. Mark previously worked on product development at Motorola, in both Automotive Telematics and Cellular Telephone groups. Mark has authored over 15 patents, focused on microprocessor performance.

Mark holds an MS in Computer Architecture from North Carolina State University (NCSU) and a BS in Electrical and Computer Engineering from Marquette University.

******

Source: https://www.aheadcomputing.com/team
 
"As everyone knows Intel is in a HUGE cost savings time period. Need to save $10b by EOY. This got cut because of design team simplification. We have three cpu design teams, P, E and Royal. Personally I would have loved to see where this went."


I think it is consistent with the "bloated org" comment by Lip-Bu and the market demand to put more resources on GPU/accelerator designs.
 
I think it is consistent with the "bloated org" comment by Lip-Bu and the market demand to put more resources on GPU/accelerator designs.
The same rumor from Reuters discussed here on SemiWiki said "Tan has told people he believed Intel was overrun by bureaucratic layers of middle managers who impeded progress at Intel’s server and desktop chips divisions and the cuts should have focused on these people."

There Y.H. posited their general uselessness, except that "If anything, they serve as a scapegoat for higher management to shift the blame of any failure to them." But I don't hear that either group is being targeted, which from the beginning has been my metric for Gelsinger's likelihood of success.

For these and other Intel alumni, we're told people going back to the 386 days left the company rather than switch from the CPUs they knew to GPUs. How long into the CPU design process does it take to start costing real money? We know what happens when a CPU design team doesn't do well; multiple teams would be insurance against that, assuming technical merit plays a role in what Intel decides to make into products.
 
I think canning Royal Core is a big mistake. It had the potential to replace both P and E cores. With a unified core design, Intel could have gone back to its tick-tock cadence.
I agree that they should have just cut middle management.
 
I think they merged the teams. Regarding pruning middle management, it might be a matter of ordering, since it makes little sense not to trim the middle layers once overall headcount has been reduced.

I think GPU/accelerator development is more important than CPU development at the moment. For example, when using Copilot to assist code writing, a lot of the compute probably runs on the NPU and GPU. The CPU's role is orchestration, plus tasks like compiling. That will be an important differentiator for laptop purchases in the future, and even more so for datacentres.
 
I think GPU/accelerator development is more important than CPU development at the moment.
Is there any reason to believe Intel will suddenly develop competence in this field after years of failure? This is not like the fabs, where years ago I worried that after 10nm Intel would not regain its decades-long competence.

For the first step, AI training, Nvidia has a commanding technical position thanks to two decades and billions of dollars invested in a complete ecosystem. Right now its big limit is TSMC capacity reservations; like Intel and everyone else, it failed to predict the current AI boom or bubble, which is soaking up the datacenter budget Intel was banking on. If this is not a bubble, Nvidia's production should someday catch up with demand, which is the latest point at which Intel's window will close.
For example, when using Copilot to assist code writing, a lot of the compute probably runs on the NPU and GPU. The CPU's role is orchestration, plus tasks like compiling. That will be an important differentiator for laptop purchases in the future, and even more so for datacentres.
The second step, inference, to the extent it can be done at the edge, is one where Intel has a better chance of competing, and it's natural to include in a CPU. But that's probably also long term, and maybe a chicken-and-egg problem: people developing for an installed base versus continuing to do it back in the datacenter.
 
They need to balance their funding across various projects.

I have an MTL laptop and I tried to run local LLM models on it. The GPU path on the iGPU (Xe) is much faster than the other execution paths. You can already do it at the edge. Lunar Lake should be more capable than MTL. I think modern CPUs are already quite capable. I would prefer the Lunar Lake arrangement: lower power plus a powerful GPU/NPU.
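
A minimal sketch of what that kind of local-LLM experiment could look like, assuming a quantized GGUF model and the llama-cpp-python bindings built with a GPU backend (e.g. SYCL or Vulkan) so layers can be offloaded to the Xe iGPU; the model path and settings below are illustrative, not from the post:

# Hypothetical local-LLM test on an Intel iGPU (MTL / Lunar Lake class machine).
# Assumes llama-cpp-python was built with a GPU backend; otherwise
# n_gpu_layers is ignored and everything runs on the CPU.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3.1-8b-instruct.Q4_K_M.gguf",  # illustrative file name
    n_gpu_layers=-1,  # offload all layers to the GPU if the backend supports it
    n_ctx=4096,       # context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what an NPU is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])

Comparing tokens per second with n_gpu_layers=0 versus -1 is one rough way to see the iGPU-versus-CPU gap the post describes.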
 
I think they merged the teams. Regarding pruning middle management, it might be a matter of ordering, since it makes little sense not to trim the middle layers once overall headcount has been reduced.

I think GPU/accelerator development is more important than CPU development at the moment. For example, when using Copilot to assist code writing, a lot of the compute probably runs on the NPU and GPU. The CPU's role is orchestration, plus tasks like compiling. That will be an important differentiator for laptop purchases in the future, and even more so for datacentres.
GitHub Copilot runs in the cloud, not locally. It uses ChatGPT, which is too big for local execution.
 
GitHub Copilot runs in the cloud, not locally. It uses ChatGPT, which is too big for local execution.
I thought about it. You can host Llama 3.1 on private clouds (Xeon + Gaudi) and point LLM requests to those instances. For most usages, local LLMs (running on AI PCs) should be sufficient.
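
A rough sketch of the "point LLM requests at a private instance" idea, assuming the Llama 3.1 deployment exposes an OpenAI-compatible endpoint (as servers such as vLLM or TGI typically do); the URL, key, and model name below are placeholders, not real endpoints:

# Hypothetical client pointed at a self-hosted Llama 3.1 instance instead of a public API.
# Assumes the private deployment (e.g. on Xeon + Gaudi) serves an OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.internal.example.com/v1",  # placeholder private endpoint
    api_key="not-a-real-key",                        # placeholder credential
)

resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-70B-Instruct",  # whatever name the deployment registers
    messages=[{"role": "user", "content": "Explain this compiler warning: unused variable 'x'."}],
)
print(resp.choices[0].message.content)

Because the client only needs a base URL swap, the same tooling could fall back to a local LLM on an AI PC or to the private cloud instance without code changes.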
 