Intel is said to have made a $2B takeover offer for chipmaker SiFive?!?!?

Daniel Nenni

Admin
Staff member
Intel is said to have offered to purchase SiFive for more than $2B. SiFive, a designer of semiconductors, has been talking to its advisors about how to proceed, according to a Bloomberg report that cited people familiar with the matter. SiFive has received multiple bids from other interested parties and has also received offers for an investment. SiFive last raised more than $60M in a Series E financing round last year and was valued at about $500M, according to PitchBook. In June 2019, Qualcomm (NASDAQ:QCOM) participated in a $65.4M Series D round for SiFive, a fabless semiconductor company building customized silicon based on the open RISC-V instruction set architecture.

Wow, great move if it is true. If Intel wants to get into the foundry business, doing turnkey ASICs is definitely the way to go. Intel already acquired eASIC. That way Intel can closely control and protect IP and make sure designs/chips are done the Intel way, absolutely.

The ASIC business has changed quite a bit over the last couple of years as fabless chip companies take control (Marvell, Broadcom, and MediaTek). Exciting times in the semiconductor ecosystem, absolutely!

 

kvas

New member
Interesting move by Intel, I like it! I wonder if they are thinking that, with the whole industry drifting towards ARM, RISC-V will be the next stop on the road to CPU unification and commoditization, and they are trying to jump directly into the future.
 

Karl S

Member
The majority of SiFive's revenue comes from OpenFive, which was formerly Open-Silicon, the ASIC company. That is probably the jewel in the crown.
And it is a graceful way to move away from the multi-core, superscalar, out-of-order-execution, overly complex x86 into heterogeneous computing, just as Apple is doing. BUT Microsoft Research's "Where's the Beef?" found that FPGAs were the way to go because there is no instruction fetch from memory. That means that RISC-V, which is a load/store architecture, is not the answer.

The remaining problem is that FPGA design is very hard to do (that is, if it is done with the traditional HDL tool flow).

There is a simpler way to design FPGAs: use the Roslyn compiler API to personalize a simple FPGA design (actually making the FPGA "programmable").
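
For what it's worth, here is a minimal sketch of that Roslyn idea, assuming the Microsoft.CodeAnalysis.CSharp NuGet package; the HalfAdder source and the "methods as outputs" mapping are made-up illustrations, not an existing tool flow. It parses a C# class that models a hardware block and lists its methods as candidate combinational outputs.

using System;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class RoslynFpgaSketch
{
    // Hypothetical C# description of a hardware block (illustrative only).
    const string Source = @"
public class HalfAdder
{
    public bool Sum(bool a, bool b)   => a ^ b;
    public bool Carry(bool a, bool b) => a & b;
}";

    static void Main()
    {
        // Parse the C# source into a syntax tree and walk its class declarations.
        var tree = CSharpSyntaxTree.ParseText(Source);
        var root = tree.GetRoot();

        foreach (var cls in root.DescendantNodes().OfType<ClassDeclarationSyntax>())
        {
            Console.WriteLine($"Block: {cls.Identifier.Text}");
            foreach (var method in cls.Members.OfType<MethodDeclarationSyntax>())
            {
                // Each method could be mapped to a combinational output in a
                // generated HDL description; here we only print its signature.
                var inputs = string.Join(", ",
                    method.ParameterList.Parameters.Select(p => p.Identifier.Text));
                Console.WriteLine($"  output {method.Identifier.Text}({inputs})");
            }
        }
    }
}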

The problem now becomes how to convince designers that there actually is a simpler way, and that RISC-V is not the magic bullet.
 

count

Active member
I think RISC-V vs ARM is going to be an important battle in the 2030s. I think it's a smart move on Intel's part, but it's a long game they are playing and it's not something that is likely to move the needle for a decade. But it is good to see the company thinking ahead for a change.
 

Karl S

Member
I think RISC-V vs ARM is going to be an important battle in the 2030s. I think it's a smart move on Intel's part, but it's a long game they are playing and it's not something that is likely to move the needle for a decade. But it is good to see the company thinking ahead for a change.
So what can we expect for the next 10 years?

I think that RISC-V is doing the same thing again (load/store) and expecting different results (which is a symptom of insanity) just because it is "free". It is based on assembler-level programming, but practically no one programs in assembler. After all, Intel started with the 8080, which was about as RISC-y as it could be.

ARM doesn't have all the answers, either. So what is the next step for ARM?

In fact, most programming is done in languages that are compiled at a more abstract level than C, and it seems that RISC-V has only an assembler and a C compiler (maybe in the works).

Intel should develop easy-to-use design tools for heterogeneous FPGA applications. It seems that Apple dumped Intel, and the M1 chip looks to be heterogeneous -- BUT not ARM.

C# has everything needed to build the blocks (classes) that have identical functional behavior to Verilog modules. And the VS IDE provides the build and debug tools for free.
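
As a rough sketch of that claim (Counter4 and its member names are illustrative, not an existing library), here is a C# class whose Tick() method plays the role of a Verilog always @(posedge clk) block:

using System;

// roughly: module counter4(input clk, input rst, input en, output reg [3:0] q);
public class Counter4
{
    public bool Reset;                    // rst
    public bool Enable;                   // en
    public int Q { get; private set; }    // q[3:0]

    // One call to Tick() models one rising clock edge; the body corresponds
    // to an always @(posedge clk) block.
    public void Tick()
    {
        if (Reset)       Q = 0;
        else if (Enable) Q = (Q + 1) & 0xF;   // wrap at 4 bits
    }
}

public static class Counter4Demo
{
    public static void Main()
    {
        var dut = new Counter4 { Reset = true };
        dut.Tick();                  // apply reset on the first edge
        dut.Reset = false;
        dut.Enable = true;
        for (int i = 0; i < 20; i++) dut.Tick();
        Console.WriteLine(dut.Q);    // 20 & 0xF = 4
    }
}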

Intel should focus on heterogeneous FPGA design. Longer pipelines, multicore, out-of-order, etc. are not the answers. Neither is load/store, because performance has been limited by memory, primarily access time, since day one.

Maybe ARM will take on heterogeneous design.

But did Apple get there first?
 
There are many opinions on this. I think the future is "multi-core superscalar out-of-order-execution overly complex x86". X86_64 still provides the most algorithm computing power per power usage. X86_64's power saving (turning units off) is only behind because there hasn't been X86_64 competition recently to drive fast-switching semiconductor technology. I don't see low power as the measure of future process quality. Every interesting computing application (there must be some exceptions) has steps that do not parallelize. I think you posters are assuming computation is illiterates watching videos and social networking on their cell phones. One example is the molecular applications that are needed for medicine development.
 

count

Active member
So what can we expect for the next 10 years?

I think that RISC-V is doing the same thing again (load/store) and expecting different results (which is a symptom of insanity) just because it is "free". It is based on assembler-level programming, but practically no one programs in assembler. After all, Intel started with the 8080, which was about as RISC-y as it could be.

ARM doesn't have all the answers, either. So what is the next step for ARM?

In fact, most programming is done in languages that are compiled at a more abstract level than C, and it seems that RISC-V has only an assembler and a C compiler (maybe in the works).

Intel should develop easy-to-use design tools for heterogeneous FPGA applications. It seems that Apple dumped Intel, and the M1 chip looks to be heterogeneous -- BUT not ARM.

C# has everything needed to build the blocks (classes) that have identical functional behavior to Verilog modules. And the VS IDE provides the build and debug tools for free.

Intel should focus on heterogeneous FPGA design. Longer pipelines, multicore, out-of-order, etc. are not the answers. Neither is load/store, because performance has been limited by memory, primarily access time, since day one.

Maybe ARM will take on heterogeneous design.

But did Apple get there first?
Maybe my saying it's a 2030s battle isn't really correct.

I expect ARM to dominate the next 10 years at the very least. ARM is increasingly moving up the value chain from mobile and embedded to PCs and servers, in a way that mirrors how Intel moved from PCs to servers in the 1990s.

As far as heterogeneous goes, that's a fabless-vs-IDM battle almost by definition. ARM can help better enable heterogeneous designs, to get itself designed into heterogeneous architectures, but it's not ARM that's building the chips; it's licensing the IP for one part of the chip that fabless companies are designing for their varied use cases.

Only with a lot of investment can RISC-V even hope to compete with ARM's ecosystem, and even then it'll take 10 years. But those investments need to be made now for that to happen, just like the investments into the ARM ecosystem in the early 2000s are what paved the way for ARM to become the powerhouse it is today.
 

Karl S

Member
There are many opinions on this. I think the future is "multi-core superscalar out-of-order-execution overly complex x86". X86_64 still provides the most algorithm computing power per power usage. X86_64's power saving (turning units off) is only behind because there hasn't been X86_64 competition recently to drive fast-switching semiconductor technology. I don't see low power as the measure of future process quality. Every interesting computing application (there must be some exceptions) has steps that do not parallelize. I think you posters are assuming computation is illiterates watching videos and social networking on their cell phones. One example is the molecular applications that are needed for medicine development.
I wish there was a way to get past "opinions" with some realistic analysis. Multi-core was sold as "pie in the sky" -- like double the performance for free. But no one could do the necessary parallel programming. Out of order execution and cache were invented for matrix inversion, but mainly justified on intuitive appeal rather than realistic test cases. Where are the practical evaluations that can be used to measure out of order execution? While we are at it let's find the test cases that measure the impact of pipeline latency on overall performance.

This thought/assumption, "I think you posters are assuming computation is illiterates watching videos and social networking on their cell phones," is offensive.

Computation is evaluating mathematical and logical expressions. The logical evaluation determines if/which arithmetic expression to evaluate.

There is operator precedence that determines the sequence of evaluation for both expressions.

Both cache and out-of-order execution were conceived before, or while, compilers were just being developed. Sadly, no one looks back and questions how effective they are.

Compilers allocate memory on the stack or heap, so computers must load operands and instructions from memory/cache: first fetch the instruction, then fetch at least one operand after calculating the memory address. Does X86_64 do something different?

The procedural flow for an if statement, which fetches both operands and instructions, loads two operands, a compare instruction, a branch instruction, and finally the next two operands and the next instruction. That flow is the primary performance limiter because of pipeline latency and memory access time, which agrees with the "Where's the Beef?" observation that FPGAs win because they avoid instruction fetches.
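
A rough C# illustration of that flow; the pseudo-instructions in the comments are schematic, not any particular ISA.

public static class BranchFetchSketch
{
    public static int Max(int[] m, int i, int j)
    {
        int c;
        if (m[i] < m[j])    // LOAD r1, m[i]   ; first operand from memory
                            // LOAD r2, m[j]   ; second operand from memory
                            // CMP  r1, r2     ; compare instruction
                            // BGE  take_else  ; branch instruction
            c = m[j];       // then the next operands and the next instruction are fetched
        else
            c = m[i];
        return c;
        // Every pseudo-instruction above is itself fetched from memory or cache
        // before its operands are; an FPGA implementation of the same expression
        // has no instruction fetch at all.
    }

    public static void Main()
    {
        System.Console.WriteLine(Max(new[] { 3, 7 }, 0, 1));   // prints 7
    }
}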
 
You people need to learn some history. In the 1950s John von Neumann, probably the smartest person of the 20th century, rejected his 1930s quantum logic and developed the von Neumann architecture. The model he used is called MRAM (random memory access with multiply and indexing). His papers showed that neural-network-type parallelism is inefficient. The theoretical basis for this was analyzed by Hartmanis and Simon. The analysis shows that Turing machines, which need parallel tapes, are slow because they lack indexing, registers, and RAM. For von Neumann's MRAM model, P is equal to NP, so there is no need for guessing. Here is my historical preprint on von Neumann's 1950s thinking: "John von Neumann's 1950s Change to Philosopher of Computation". URL: https://arxiv.org/abs/2009.14022

Von Neumann showed that complex instruction set CPUs such as X86_64 are much faster than RISCs, and especially ARMs.
 

Karl S

Member
You people need to learn some history. In the 1950s John von Neumann, probably the smartest person of the 20th century, rejected his 1930s quantum logic and developed the von Neumann architecture. The model he used is called MRAM (random memory access with multiply and indexing). His papers showed that neural-network-type parallelism is inefficient. The theoretical basis for this was analyzed by Hartmanis and Simon. The analysis shows that Turing machines, which need parallel tapes, are slow because they lack indexing, registers, and RAM. For von Neumann's MRAM model, P is equal to NP, so there is no need for guessing. Here is my historical preprint on von Neumann's 1950s thinking: "John von Neumann's 1950s Change to Philosopher of Computation". URL: https://arxiv.org/abs/2009.14022

Von Neumann showed that complex instruction set CPUs such as X86_64 are much faster than RISCs, and especially ARMs.
So he could also "see" into the future, then, since the CISC/RISC concepts did not exist at that time.

It so happens that I began working in computer systems in 1957: installing, troubleshooting, modifying, and getting inside computers to see exactly how things worked or didn't work. The first system/computer was the AN/FSQ-7, a descendant of MIT's Project Whirlwind.

Also, I was working in IBM systems development when cache was invented for the System/360 Model 85, and when John Cocke and George Radin were touting the 801 (RISC) architecture.

If you Google MIT Project Whirlwind, you will find some interesting history about computers that I saw first hand. Any more suggestions about what I should do? That is, except go to Hell?
 
In the late 1940s and early 1950s von Neumann was creating his architecture. He understood the importance of random access memory, accessed into registers by instructions with as many addressing modes as could be encoded in the instructions. The story is told in William Aspray's excellent book, "John von Neumann and the Origins of Modern Computing."
 

kvas

New member
You people need to learn some history. In the 1950s John von Neumann, probably the smartest person of the 20th century, rejected his 1930s quantum logic and developed the von Neumann architecture. The model he used is called MRAM (random memory access with multiply and indexing). His papers showed that neural-network-type parallelism is inefficient. The theoretical basis for this was analyzed by Hartmanis and Simon. The analysis shows that Turing machines, which need parallel tapes, are slow because they lack indexing, registers, and RAM. For von Neumann's MRAM model, P is equal to NP, so there is no need for guessing. Here is my historical preprint on von Neumann's 1950s thinking: "John von Neumann's 1950s Change to Philosopher of Computation". URL: https://arxiv.org/abs/2009.14022

Von Neumann showed that complex instruction set CPUs such as X86_64 are much faster than RISCs, and especially ARMs.
So von Neumann preferred random access memory (as in the von Neumann architecture) to Turing machines on efficiency grounds -- fair enough. Still, I don't see how this maps to the RISC vs CISC debate: both of those work with random access memory. What am I missing?
 