Emulation Webinar SemiWiki

WP_Query Object
(
    [query] => Array
        (
        )

    [query_vars] => Array
        (
            [error] => 
            [m] => 
            [p] => 0
            [post_parent] => 
            [subpost] => 
            [subpost_id] => 
            [attachment] => 
            [attachment_id] => 0
            [name] => 
            [pagename] => 
            [page_id] => 0
            [second] => 
            [minute] => 
            [hour] => 
            [day] => 0
            [monthnum] => 0
            [year] => 0
            [w] => 0
            [category_name] => 
            [tag] => 
            [cat] => 
            [tag_id] => 
            [author] => 
            [author_name] => 
            [feed] => 
            [tb] => 
            [paged] => 0
            [meta_key] => 
            [meta_value] => 
            [preview] => 
            [s] => 
            [sentence] => 
            [title] => 
            [fields] => 
            [menu_order] => 
            [embed] => 
            [category__in] => Array
                (
                )

            [category__not_in] => Array
                (
                )

            [category__and] => Array
                (
                )

            [post__in] => Array
                (
                )

            [post__not_in] => Array
                (
                )

            [post_name__in] => Array
                (
                )

            [tag__in] => Array
                (
                )

            [tag__not_in] => Array
                (
                )

            [tag__and] => Array
                (
                )

            [tag_slug__in] => Array
                (
                )

            [tag_slug__and] => Array
                (
                )

            [post_parent__in] => Array
                (
                )

            [post_parent__not_in] => Array
                (
                )

            [author__in] => Array
                (
                )

            [author__not_in] => Array
                (
                )

            [ignore_sticky_posts] => 
            [suppress_filters] => 
            [cache_results] => 
            [update_post_term_cache] => 1
            [lazy_load_term_meta] => 1
            [update_post_meta_cache] => 1
            [post_type] => 
            [posts_per_page] => 10
            [nopaging] => 
            [comments_per_page] => 50
            [no_found_rows] => 
            [order] => DESC
        )

    [tax_query] => WP_Tax_Query Object
        (
            [queries] => Array
                (
                )

            [relation] => AND
            [table_aliases:protected] => Array
                (
                )

            [queried_terms] => Array
                (
                )

            [primary_table] => wp5_posts
            [primary_id_column] => ID
        )

    [meta_query] => WP_Meta_Query Object
        (
            [queries] => Array
                (
                )

            [relation] => 
            [meta_table] => 
            [meta_id_column] => 
            [primary_table] => 
            [primary_id_column] => 
            [table_aliases:protected] => Array
                (
                )

            [clauses:protected] => Array
                (
                )

            [has_or_relation:protected] => 
        )

    [date_query] => 
    [queried_object] => 
    [queried_object_id] => 
    [request] => SELECT SQL_CALC_FOUND_ROWS  wp5_posts.ID FROM wp5_posts  WHERE 1=1  AND wp5_posts.post_type = 'post' AND (wp5_posts.post_status = 'publish' OR wp5_posts.post_status = 'expired' OR wp5_posts.post_status = 'tribe-ea-success' OR wp5_posts.post_status = 'tribe-ea-failed' OR wp5_posts.post_status = 'tribe-ea-schedule' OR wp5_posts.post_status = 'tribe-ea-pending' OR wp5_posts.post_status = 'tribe-ea-draft')  ORDER BY wp5_posts.post_date DESC LIMIT 0, 10
    [posts] => Array
        (
            [0] => WP_Post Object
                (
                    [ID] => 289069
                    [post_author] => 16
                    [post_date] => 2020-08-04 14:00:20
                    [post_date_gmt] => 2020-08-04 21:00:20
                    [post_content] => What do you do next when you've already introduced an all-in-one extreme edge device, supporting AI and capable of running at ultra-low power, even harvested power? You add a software flow to support solution development and connectivity to the major clouds. For Eta Compute, their TENSAI flow.

The vision of a trillion IoT devices only works if the great majority of those devices can operate at ultra-low power, even harvested power. Any higher, and the added power generation burden and field maintenance make the economics of the whole enterprise questionable. Alternatively, reduce the number of devices we expect to need, and the economics of supplying those devices looks shaky. The vision depends on devices that are close to self-sufficient in power.

Adding to the challenge, we increasingly need AI at the extreme edge. This is in part to manage the sensors, to detect locally and communicate only when needed. When we do most of what we need locally, there's no need to worry about privacy and security. Further, we often need to provide real-time response without the latency of a roundtrip to a gateway or the cloud. And operating expenses go up when we must leverage network or cloud operator services (such as AI).

All-in-One Extreme Edge

All-in-one extreme edge

Eta Compute has already been leading the charge on ultra-low power (ULP) compute. They do this by building on their proprietary self-timed logic and continuous voltage and frequency scaling technology. In partnerships with Arm for Cortex-M and NXP for CoolDSP, they have already established a multi-core IP platform for ULP AI at the extreme edge. This typically runs below 1mW when operating and below 1uA in standby. It can handle a wide range of use cases – image, voice and gesture recognition, sensing and sensor fusion among other applications. The platform can run any neural network and supports the flexible quantization now commonly seen in many inference applications.
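The "flexible quantization" mentioned above is, in most inference stacks, an affine mapping from float weights to low-precision integers. A minimal generic sketch of that idea follows; this is not Eta Compute's actual scheme, and the function names are illustrative:

```python
# Toy affine int8 quantization of the kind many inference flows use.
# Generic sketch only -- not Eta Compute's actual implementation.

def quantize(weights, num_bits=8):
    """Map float weights to signed integers with a scale and zero point."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin) if hi != lo else 1.0
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the quantized integers."""
    return [(v - zero_point) * scale for v in q]
```

The round trip loses at most about one quantization step per weight, which is why 8-bit inference typically costs little accuracy while cutting memory and MAC width dramatically.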

TENSAI Flow

Semir Haddad (Sr Dir Product Marketing) told me that Eta Compute’s next step along this path is to provide a full-featured software solution to complement their hardware. This is designed to maximize ease of adoption, requiring little to no embedded programming. They will supply reference designs in support of this goal. The software flow (in development) is called TENSAI. Networks are developed and trained in the cloud in the usual way, e.g. through TensorFlow, then reduced through TensorFlow Lite. The TENSAI compiler takes the handoff, optimizing the network for the embedded Eta Compute platform. It also provides all the middleware: the AI kernel, FreeRTOS, the hardware abstraction layer and sensor drivers. With the goal, as I said before, that not a single line of new embedded code should be needed to bring up a reference design.

Azure, AWS, Google cloud support

Data collection connects back to the cloud through a partnership with Edge Impulse (who I mentioned in an earlier blog). They support cloud connections with all the standard clouds – Azure, AWS and Google Cloud (he said they see a lot of activity on Azure). Semir stressed there is opportunity here to update training for edge devices, harvesting data from the edge to improve accuracy of abnormality detection for example. I asked how this would work, since sending a lot of data back from the edge would kill power efficiency. He told me that this would be more common in an early pilot phase, when you’re refining training and not so worried about power. Makes sense. Semir also said that their goal is to provide a platform which is as close to turnkey as possible, except for the AI training data. They even provide example trained networks in the NN Zoo. I doubt they could make this much easier. TENSAI flow is now available. Check HERE for more details. [post_title] => All-In-One Extreme Edge with Full Software Flow [post_excerpt] => [post_status] => publish [comment_status] => open [ping_status] => open [post_password] => [post_name] => all-in-one-extreme-edge-gets-tensai-software-support [to_ping] => [pinged] => [post_modified] => 2020-08-04 12:51:57 [post_modified_gmt] => 2020-08-04 19:51:57 [post_content_filtered] => [post_parent] => 0 [guid] => https://semiwiki.com/?p=289069 [menu_order] => 0 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [1] => WP_Post Object ( [ID] => 288841 [post_author] => 11830 [post_date] => 2020-08-04 10:00:18 [post_date_gmt] => 2020-08-04 17:00:18 [post_content] =>

Attack vectors and EDA countermeasures

One of the Designer Track sessions at this year’s DAC focused on the popular topic of automotive electronics. The title was particularly on-point: The Modern Automobile: A Safety and Security “Hot Zone”. The session was chaired by Debdeep Mukhopadhyay, a Professor at the Indian Institute of Technology in Kharagpur.

This special, invited session can be summarized as follows:

The advent of the Automotive 2.0 era has driven increased integration of electronics and networking into the conventional automobile. The modern electrified automobile can be simply viewed as a connected, embedded system on wheels. Not surprisingly, safety and security concerns are coming increasingly to the forefront. This special session will focus on answers to multiple questions related to automotive safety and security - what are the issues at the system level, what are the standards available today, how do safety and security co-exist (or collide!), and what does it mean to build and verify security in our chips.

Presenters included:

  • Chuck Brokish, Director of Transportation Business Development, Green Hills Software LLC
  • David Foley, Semiconductor Architect, Texas Instruments, Inc.
  • Steve Carlson, Director, Cadence Design Systems

While all the presentations were relevant and on-point, I’d like to focus on the presentation from Steve Carlson at Cadence. I’ve known Steve for a long time. His career began at LSI Logic, arguably the birthplace of ASIC. Steve began by pointing out the magnitude of the cyber security problem. The attacks are ubiquitous, everything from shipping vessels to pacemakers. Governments are getting involved and we can expect lots more compliance requirements.

If one looks at the attack vectors for this problem, a lot of it is at the hardware or hardware/software interface level. So, EDA should be able to help. Said another way, after all the time and money invested in software security, it’s time for hardware to take center stage.

Steve pointed out that we’ve seen a lot of work in the functional safety area regarding standards compliance and certification. These techniques will transfer now to the security domain. Steve talked about pre-silicon attack verification – basically a way to validate the robustness of security layers with simulated attacks on the design before tapeout. Formal methods hold great promise for this activity as they are not dependent on input vectors and the associated “blind spots” they can bring. More on this in a moment.

A comprehensive overview of the various attack vectors and the countermeasures EDA offers was presented. This diagram really drove home the breadth of the problem. It’s included at the top of this post. Rather than spend an hour on this chart (DAC presentations are short) Steve chose to focus on formal methods. It turns out there are a number of specialized formal security applications that can prove things like data integrity, so this is a promising approach to verify compliance. The breadth of this technology is summarized in the diagram below.

Functional and Security Formal Analysis
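As a toy illustration of why the formal methods mentioned above avoid input-vector "blind spots" (this is the underlying idea only, not a Cadence tool flow): a security property of a small hardware model can be checked over its entire input space, so there is no stimulus set to get wrong. The register model and key value below are hypothetical:

```python
# Toy "formal" proof by exhaustive enumeration: verify that a guarded
# register model never updates unless the unlock key is presented.
# Hypothetical model for illustration -- real formal tools prove such
# properties symbolically rather than by brute force.

KEY = 0xA5  # assumed unlock value for this toy model

def next_state(state, write_en, key, data):
    """The register updates only when write_en is set AND the key matches."""
    if write_en and key == KEY:
        return data
    return state

def prove_no_unauthorized_write():
    """Check the property over the entire (small) input space."""
    for state in range(4):
        for write_en in (0, 1):
            for key in range(256):
                for data in range(4):
                    new = next_state(state, write_en, key, data)
                    if key != KEY and new != state:
                        return False  # counterexample found
    return True
```

Because every combination is covered, a pass is a proof for this model, which is exactly the guarantee simulation with sampled vectors cannot give.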

Steve ended his discussion with a vision of top-down verification of hardware security. Similar to approaches used for early hardware/software verification, he advocated a top-down approach to modeling the entire system-in-package, including the chip, interposer, package and board. This will allow the development of attack tests at a high level that can be used later in the design flow to verify the robustness of the system.

There are industry-level efforts to advance the cause as well. Cadence is working with several organizations to advance the state of testing and compliance, including Accellera. Security is a daunting task; it was good to hear about some positive momentum from Cadence.

If you have a DAC pass, I encourage you to watch this entire designer track session. I believe the material will be available online for an extended period of time.  You can find this session on The Modern Automobile: A Safety and Security “Hot Zone” here.

 

[post_title] => Cadence on Automotive Safety: Without Security, There is no Safety [post_excerpt] => [post_status] => publish [comment_status] => open [ping_status] => open [post_password] => [post_name] => cadence-on-automotive-safety-without-security-there-is-no-safety [to_ping] => [pinged] => [post_modified] => 2020-08-02 11:19:22 [post_modified_gmt] => 2020-08-02 18:19:22 [post_content_filtered] => [post_parent] => 0 [guid] => https://semiwiki.com/?p=288841 [menu_order] => 0 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [2] => WP_Post Object ( [ID] => 289060 [post_author] => 13840 [post_date] => 2020-08-04 06:00:42 [post_date_gmt] => 2020-08-04 13:00:42 [post_content] => A lot has been written, and even more spoken, about artificial intelligence (AI) and its uses. Case in point: the use of AI to make autonomous vehicles (AV) a reality. But, surprisingly, not much is discussed about pre-processing the inputs feeding AI algorithms. Understanding how input signals are generated, pre-processed and used by AI algorithms ultimately leads to the need to tightly combine advanced digital signal processing (DSP) with AI processing. [caption id="attachment_289109" align="alignright" width="300"]VSORA MPU for DSP and All Processing. Source: VSORA[/caption] Today’s AI computing units – CPUs, GPUs, FPGAs, ASICs, hardware accelerators, etc. – focus on the execution of the algorithms, overlooking input signal management, which perhaps explains why the issue has never been raised before. Until now, that is. VSORA devised a compact and efficient approach combining advanced digital signal processing (DSP) with AI algorithmic acceleration on the same silicon, exchanging data via a large on-chip memory, setting a new standard for performance, power consumption, efficiency, area and cost. See figure 1.

Fundamentals of Autonomous Vehicles

To understand the issue, let’s consider a key AI application: autonomous vehicles. The brain or controller of a self-driving car operates on a three-stage loop: Perception, Planning, and Action. See figure 2. [caption id="attachment_289110" align="alignnone" width="525"]Figure 2. Autonomous vehicle three-stage loop controller. Source: Shutterstock[/caption]

Perception

In the Perception stage, the controller learns the environmental characteristics of the vehicle surroundings. This is accomplished by collecting a variety of data produced by a range of AV sensors, outside the scope of ordinary sensors monitoring the car’s status, such as oil and water temperature, oil and tire pressure, battery charge, light bulb functionality, and the like. The AV sensors encompass a combination of different types of cameras, radars, lidars, ultrasonic devices, etc. The actual type and quantity depend on the vehicle manufacturer. For example, Tesla elected not to use lidar sensors. The data generated by these sensors is processed via compute-intensive DSP algorithms to extract accurate and vital information to ensure safe AV driving. The higher the level of autonomy of the vehicle, the more the vehicle relies on the accuracy and timeliness of what the sensors provide.

Autonomous Vehicle Sensors

AV sensors can be grouped into two broad classes: “cost-effective” and “high-performance.” Both types of sensors capture data from the vehicle’s environment and process it in situ via pre-defined algorithmic processing before sending it to the controller. The difference is in the handling of the pre-processed data. In cost-effective sensors, pre-processed data is further run through local algorithms, for instance tracking, which generates lists of tracked objects dispatched upstream to the controller. For example, a sensor, be it radar, lidar or camera, may detect a series of objects in front of the car and then run them through a local image classification algorithm in an attempt to identify a pedestrian about to cross a road. In high-performance sensors, the pre-processed data from all sensors is fed straight through to the controller where it is run through a series of algorithms. Examples include fusing this data with data captured from other sensors for the same object, clustering that combines multiple detections into bigger units, distance transforms, or a particle filter, typically some type of Kalman filter. While the data is unique to each sensor, it corresponds to objects that can be represented by vectors in the real world (x,y location, distance, direction of travel, speed, etc.). Once it is ensured that all the vectors from all sensors are aligned and use the same reference frame, each vector is positioned on an x-y grid in a map. The 2D map populated with the vectors encompasses the vehicle environment and can be described using a 2D matrix. See figure 3. [caption id="attachment_289111" align="alignnone" width="2355"]Figure 3: Cost-optimized vs High-performance AV Sensors. Source: VSORA[/caption] The complex processing generates tracking information. 
To prevent false information, the controller may track many more objects than it presents, and through a decision process resolve to track-and-show an object, continue tracking it, or delete it from further consideration. An example of the input information to the first stages could be the 3D lidar cloud and the 2D and/or 3D camera images. The two types of sensors lead to significant differences in the system requirements. The local algorithms in cost-effective sensors reduce the computing power required of the controller and the data-transfer bandwidth of the communication with the controller. However, these advantages come at the cost of accuracy because of imperfections in the sensors. In poor light conditions, or bad weather, sensors may generate incomplete and ambiguous data that may cause serious problems. Missing a pedestrian crossing a road in a blizzard because the camera failed to identify the pedestrian may have dramatic consequences. In high-performance sensors, the amount of data traffic between sensors and control unit is significantly higher, demanding larger bandwidth and far more computing power in the controller to combine and process data from several sensors in the time frame available. The upside is more accurate decisions since they are based on all sensor data. The bottom line is that the Perception stage in the AV control unit is greatly dependent on powerful DSP, heavy data traffic and intense memory accesses.
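The vector-on-a-grid representation described above can be sketched in a few lines; the grid resolution, field names and `Detection` class here are illustrative assumptions, not VSORA's actual data format:

```python
# Minimal sketch: placing fused sensor detections (state vectors) on a
# 2D occupancy grid centred on the vehicle. Cell size and fields are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    x: float        # metres, vehicle frame
    y: float        # metres, vehicle frame
    speed: float    # m/s
    heading: float  # radians

GRID_SIZE = 100  # 100 x 100 cells
GRID_RES = 0.5   # metres per cell -> covers a 50 m x 50 m area

def to_cell(d, size=GRID_SIZE, res=GRID_RES):
    """Map a detection (vehicle at the grid centre) to a cell index."""
    col = int(d.x / res) + size // 2
    row = int(d.y / res) + size // 2
    return row, col

def build_occupancy(detections):
    """Return a 2D matrix with one state vector per occupied cell."""
    grid = [[None] * GRID_SIZE for _ in range(GRID_SIZE)]
    for d in detections:
        r, c = to_cell(d)
        if 0 <= r < GRID_SIZE and 0 <= c < GRID_SIZE:
            grid[r][c] = d
    return grid
```

The resulting matrix is exactly the "2D map populated with vectors" the text describes, and it is the shared structure that clustering and distance transforms then operate on.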

Challenges

The IEEE article titled “Fusion of LiDAR and Camera Sensor Data for Environment Sensing in Driverless Vehicles” states that “heterogeneous sensors simultaneously capture various physical attributes of the environment.” However, “these multimodal sensor data streams are different from each other in several ways, such as temporal and spatial resolution, data format, and geometric alignment. For the subsequent Perception algorithms to utilize the diversity offered by multimodal sensing, the data streams need to be spatially, geometrically and temporally aligned with each other.” See figure 4. [caption id="attachment_289112" align="alignnone" width="525"]Figure 4: Environmental Perception by combining sensor raw data. Source: VSORA[/caption] These requisites pose a series of challenges, such as how to create a geometrical model to align the sensor outputs, how to process them to interpolate missing data with quantifiable uncertainty, how to fuse distance data, and how to combine a 3D point cloud from a lidar with luminance data from a camera, among others. To make an autonomous vehicle reliable, accurate and safe, it is imperative to solve these challenges. As discussed above, collective AV sensor data is typically combined into an occupancy map that stores information on relevant individual objects. Clustering identifies objects in an occupancy map, and adding distance transforms to clustering increases the accuracy of tracking and of the entire system. See figure 5. [caption id="attachment_289113" align="alignnone" width="525"]Figure 5. Occupancy grid. Source: IEEE article referenced above[/caption] When tracking objects, it is crucial to know where they are at any given time, and to predict where they may be in the near future. 
While implementing prediction is relatively simple, the problem explodes in complexity as the number of objects to be tracked increases. The issue gets aggravated when objects disappear and reappear for various reasons. For instance, when tracking two objects moving in different directions, one object may suddenly overlay and hide the other. Distance transforms improve algorithmic decisions by identifying distances between objects, and thereby help overcome or reduce sensor-induced errors and anomalies. Clustering also helps to prune the decision trees. The probability of having accurate information on an object increases substantially with 300 parallel pings on it vs. a single ping. The same also helps when deciding to start or end the tracking of a real or false object. An adaptive particle filter, typically based on a Kalman filter, may be used as the tracking framework. For example, an implementation of a recursive Bayesian estimation algorithm can handle non-linear and non-Gaussian state estimation problems. Just as important, low latency in the communication between the Perception and the Decision stages is essential for accuracy and reliability. To exemplify, at a speed of 240 km/h a vehicle covers about 67 meters every second, demanding system responses much faster than 1 second per iteration to avoid catastrophic outcomes. The above considerations highlight the complexity of the task and the conspicuous computing power required to confront it. Only an advanced DSP implementation in an ad-hoc design architecture can solve the issues.
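To fix ideas about the recursive estimators mentioned above, here is a minimal sketch of a 1D constant-velocity Kalman filter tracking a single object's position. Real AV trackers are multi-dimensional and must handle data association and tuned noise models; the dt, process-noise and measurement-noise values below are illustrative assumptions:

```python
# Minimal 1D constant-velocity Kalman filter (position + velocity state).
# Illustrative values only; production trackers use full matrix forms.

def kalman_track(measurements, dt=0.1, q=0.01, r=0.5):
    """Return filtered positions for a sequence of noisy position readings."""
    x, v = measurements[0], 0.0        # state: position, velocity
    p = [[1.0, 0.0], [0.0, 1.0]]       # state covariance
    out = []
    for z in measurements:
        # Predict: propagate state through the constant-velocity model
        x = x + v * dt
        p = [[p[0][0] + dt * (p[0][1] + p[1][0]) + dt * dt * p[1][1] + q,
              p[0][1] + dt * p[1][1]],
             [p[1][0] + dt * p[1][1],
              p[1][1] + q]]
        # Update: blend the prediction with the position measurement
        s = p[0][0] + r                # innovation variance
        k0, k1 = p[0][0] / s, p[1][0] / s  # Kalman gains
        innov = z - x
        x += k0 * innov
        v += k1 * innov
        p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
             [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
        out.append(x)
    return out
```

Each tracked object in the occupancy map would carry one such filter (in 2D or 3D form), which is why tracking cost grows quickly with the number of objects.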

Planning / Decision

The Perception stage is followed by the Planning or Decision stage that establishes a collision-free and safe path to the vehicle destination. The objective is achieved by combining risk assessment or situation understanding with mission and route planning. These tasks require accounting for vehicle dynamics, traffic rules, road boundaries and potential obstacles. The traditional procedure for the Planning stage progresses through four steps. It starts with route planning, which searches for the best route from the origin to the destination. Traffic information generated through C-V2X inputs may be included in this stage. The second step determines the geometric trace the vehicle should drive on to reach the destination following set boundaries (road / lane) and traffic rules, whilst avoiding obstacles. The third step deals with manoeuvre choice. Based on vehicle position and speed, this step comes up with the best vehicle action to realize the path identified in step 2. As an example, the best action among “turn right”, “go straight”, “change lane to the left”, etc. Finally, the fourth step deals with trajectory planning, i.e., the vehicle’s actual transition from one state to the next in real time. It involves vehicle constraints and navigation rules (lane/road boundaries, traffic rules, movement, …) while at the same time avoiding obstacles on the planned path (other vehicles, road conditions,...). Since trajectory planning is both time and velocity dependent, it can be regarded as the actual motion planning for the vehicle. During this time, the system evaluates errors between the actual location and the planned trajectory to revise the trajectory plan if needed. The bottom line is that the Planning stage in the AV control unit is greatly dependent on powerful AI processing and intense memory accesses.
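The second step above, finding a geometric trace that reaches the goal while avoiding obstacles, is at heart a graph search over the occupancy map. A toy breadth-first-search version on a boolean grid gives the flavor; production planners use A*-style and kinodynamic searches that also respect vehicle dynamics:

```python
# Toy grid path planner: breadth-first search around occupied cells.
# Illustrative only; real planners account for vehicle constraints.
from collections import deque

def plan_path(grid, start, goal):
    """grid: 2D list, True = obstacle. Returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk predecessors back to start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and not grid[nr][nc] and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                q.append((nr, nc))
    return None  # goal unreachable
```

BFS returns a shortest cell sequence, which a trajectory planner would then smooth into a drivable, velocity-profiled path (step 4 above).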

Action: Execution/Control

The final stage in the autonomous vehicle controller is the Action stage. This stage implements the trajectory plan computed by the planning stage. For example, activating a turn-signal, moving to an exit lane, and turning off the present road. As the actions get executed, the environmental situation changes, forcing the entire process to re-start from the Perception stage.

VSORA

VSORA, a startup with decades of experience in creating DSP designs for the wireless and communication industry, conceived a re-programmable, scalable, software-driven multi-core DSP/AI solution that ensures cost-effective development. At a high level, the VSORA architecture is similar to the DSP architecture described in figure 6. [caption id="attachment_289115" align="alignnone" width="638"]Figure 6: VSORA MPU high-level view[/caption] Called a Matrix Processing Unit (MPU) for its handling of multi-dimensional matrices, the device can be configured with a variable number of cores. Each core is also configurable by defining a set of parameters. For DSP applications, the number of ALUs can be programmed in multiples of 8, up to 1,024 per core. For AI applications, the number of MACs can be programmed from 256 to 65,536 per core. The cores can further be programmed in terms of quantization, on-chip memory sizes, etc. The architectures of the two types of MPU are similar but not identical.

VSORA: Combining Signal Processing and AI

As discussed above, the design of a controller managing autonomous vehicles relies heavily on a variety of algorithms for signal processing and for AI, both requiring a high level of performance while keeping power consumption and cost to a minimum. To compound the issue, these algorithms undergo continuous refinement and updating, making a solution cast in hardware unacceptable. The VSORA solution is unique in that the same hardware can easily handle both the Perception and the Planning stages of the controller loop as discussed above. [caption id="attachment_289116" align="alignright" width="300"]Figure 7: VSORA MPU for Perception & Planning Processing[/caption] Specifically, the Perception stage could be mapped on an optimized DSP MPU, and the Planning stage on a second MPU configured to accelerate AI algorithms. The two MPUs share a large on-chip memory, with the DSP MPU writing results into the memory and the AI MPU reading those results out of the memory, in sequence, preventing memory conflicts. See figure 7. The setup eliminates the performance bottleneck associated with external memories caused by restricted data-transfer bandwidth. It also reduces latency and power consumption by drastically shortening the data path to/from memory. The actual implementation of the entire system fits in a small footprint, consumes low power, provides high performance, and is remarkably efficient. The VSORA architecture is ideal for serving the AI/ADAS industry.

Conclusions

Early implementations of a VSORA system consisting of 512 ALUs and 16kMACs with 32MWord of memory, based on a 7 nm process technology node, fit in a silicon area of approximately 25 sqmm. Running at 2GHz, the ALU MPU executed about 1T MAC/second, and the AI MPU delivered a performance of 65 TOPs. The results meet the challenges posed by today’s autonomous vehicles. The architecture is scalable to the extent that by doubling the area to 50 sqmm, it would be capable of handling the requirements of the next generations of autonomous vehicles. Special thanks to co-author Jan Pantzar, VSORA Sales VP. [post_title] => Combining AI and Advanced Signal Processing on the Same Device [post_excerpt] => [post_status] => publish [comment_status] => open [ping_status] => open [post_password] => [post_name] => combining-ai-and-advanced-signal-processing-on-the-same-device [to_ping] => [pinged] => [post_modified] => 2020-08-04 20:53:53 [post_modified_gmt] => 2020-08-05 03:53:53 [post_content_filtered] => [post_parent] => 0 [guid] => https://semiwiki.com/?p=289060 [menu_order] => 0 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [3] => WP_Post Object ( [ID] => 289103 [post_author] => 13 [post_date] => 2020-08-03 10:00:20 [post_date_gmt] => 2020-08-03 17:00:20 [post_content] => Every chip needs ESD protection, especially RF, analog and nm designs. Because each type of design has specific needs relating to IOs, pad rings, operating voltage, process, etc., it is important that the ESD protection network is carefully tailored to the design. Also, because of interactions between the design and its ESD protection network, this work cannot wait until the end of the design cycle and just get slapped on. At each stage in the design process, from library definition and modeling, circuit design and layout, to verification, the requirements of ESD protection need to be considered. 
Magwel Webinar Designing and Verifying HBM ESD Protection Networks Because of the level of expertise required, many design teams look for a comprehensive solution from companies that specialize in ESD design. The solution provided could consist of everything from cell design to verification of the completed design for ESD. Sofics and Magwel have teamed up to work with customers, together offering everything that is needed to implement and verify a comprehensive ESD solution. Sofics has deep experience on the design side and Magwel develops ESD discharge event verification tools. Partnering with Sofics and Magwel brings a lot to the table. Sofics and Magwel will be hosting a joint webinar to discuss the complete solution they offer by working together with customers. In this webinar Sofics and Magwel will discuss the areas they each focus on and the touch points between them, and with the customer and foundry. There will be a presentation from each company that goes into detail on their portion of the solution. This will be followed by a Q&A. Sofics has a deep portfolio of working with processes from leading and specialty foundries. As a result, they have learned how to optimize many important performance criteria. For low power, they offer cells with extremely low leakage. Where RF and high-speed performance is important, they minimize capacitance and resistance. At the same time, their ESD protection structures protect against multiple types of ESD failures. In the webinar Sofics will discuss these and other specific techniques they use. Magwel is working with Sofics so that their ESDi tool works efficiently with the ESD devices and process parameters for each design. ESDi can verify that the ESD protection network is going to work correctly during ESD discharge events. Using jointly prepared device models and technology files, Magwel's ESDi can simulate every ESD discharge event, including voltage drop and current levels for each device and net. 
The webinar will include an overview of Magwel’s ESDi including its analysis capabilities, violation reporting and debugging features. The webinar is scheduled for August 11th 10AM PST. You can sign up for the webinar online here. [post_title] => Webinar - Designing and Verifying HBM ESD Protection Networks, August 11, 10AM PST [post_excerpt] => [post_status] => publish [comment_status] => open [ping_status] => open [post_password] => [post_name] => webinar-designing-and-verifying-hbm-esd-protection-networks-august-11-10am-pst [to_ping] => [pinged] => [post_modified] => 2020-08-04 20:58:46 [post_modified_gmt] => 2020-08-05 03:58:46 [post_content_filtered] => [post_parent] => 0 [guid] => https://semiwiki.com/?p=289103 [menu_order] => 0 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [4] => WP_Post Object ( [ID] => 289072 [post_author] => 19385 [post_date] => 2020-08-03 06:00:55 [post_date_gmt] => 2020-08-03 13:00:55 [post_content] =>
It is now time for the EUV community to realize they are caught between the proverbial Scylla and Charybdis. In Greek mythology, the two monsters terrorized ships that were unlucky enough to pass between them. By avoiding one, you approached the other.

S for Scylla, or Stochastics

Scylla was once a beautiful nymph, later turned into a frightful monster with six dogs forming her lower body [1]. The dogs would devour passers-by. In the same way, photons are devoured in situations where stochastic effects (due to photon shot noise) are aggravated. Table 1 lists the aggravating factors for EUV.
Table of EUV Stochastic Aggravators
Table 1. Factors aggravating stochastic effects for EUV. The listed number indicates how much the dose must be multiplied to account for photon shot noise. References are cited in the text. The number of diffraction order combinations has always been taken for granted in conventional lithography. For an assumed 2D cell pitch p, wavelength λ, and numerical aperture NA, the number of combinations is approximately π·NA²/(λ/p)², since each shift of λ/p within the pupil forces a change in the diffraction order combination. Referring to the figure below, for different line end widths, the photon shot noise would be 6σ = 9.5% per diffraction pattern, given 4000 photons in the line end's semicircular process variation (P-V) band (width = 10% of the line width). Assuming the 2D cell pitch to be 10x larger, the dose required to fulfill this condition on various lithography tools is calculated.
EUV faces Scylla and Charybdis
Figure 1. Estimates of the number of diffraction order combinations and the consequent shot noise impact for different lithography systems. Since the P-V band is proportional to the square of the feature size, the smaller feature size worsens the shot noise. However, the shorter wavelength, larger NA, and larger cell pitch also aggravate shot noise by increasing the number of diffraction order combinations making up the final image. The easiest way to combat this effect is to reduce the pupil fill [2]. This has already happened for immersion lithography, since it is already operating near the resolution limit; EUV lithography users, on the other hand, are tempted to use higher pupil fill with conventional or annular illumination. Thus, ASML had to provide users with low pupil fill options with the recommendation to use them [3,4]. Other fundamental considerations, when reviewed, also can be seen to worsen the stochastic behavior. Defocus [5] affects all lithography systems, but worsens the already severe situation for EUV. It is generally expected to be minimized by optimized illumination. EUV also has some unique traits, such as being a band of wavelengths with a few % bandwidth [6]; photons at each wavelength are independent of the others, and behave slightly differently in the imaging due to the different reflectivities for each wavelength at each optical element. Likewise, different angles of incidence and reflection also lead to different reflectivities [6]; as this occurs at the object (the mask), it has observable effects such as image shifts [3]. All told, for a full pupil fill the EUV dose can be multiplied hundreds, even thousands of times. Thus, pupil fill reduction is expected to help rein this in, but even in the best illumination case, a dose increase of several times due to the wavelength bandwidth cannot be avoided. A minimal pupil fill minimizes the stochastic effects, but it consequently brings EUV closer to the other deadly monster.
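As a sanity check on the figures above, photon counts follow Poisson statistics, so the relative 6σ shot noise for N collected photons is 6/√N, and the article's π·NA²/(λ/p)² estimate of diffraction order combinations is a simple function of the optics. A minimal sketch (the NA/wavelength/pitch values at the end are illustrative, not the article's exact cases):

```python
import math

def six_sigma_shot_noise(n_photons):
    """Relative 6-sigma photon shot noise for a Poisson photon count."""
    return 6.0 / math.sqrt(n_photons)

def photons_for_noise(target):
    """Photons needed so 6-sigma relative noise does not exceed `target`."""
    return math.ceil((6.0 / target) ** 2)

def diffraction_order_combinations(na, wavelength, pitch):
    """Approximate count pi * NA^2 / (wavelength/pitch)^2 from the text."""
    return math.pi * (na * pitch / wavelength) ** 2

# 4000 photons in the P-V band gives the ~9.5% 6-sigma figure quoted above
print(f"{six_sigma_shot_noise(4000):.1%}")   # -> 9.5%
print(photons_for_noise(0.095))              # -> 3989

# Illustrative EUV numbers: NA = 0.33, 13.5 nm wavelength, 400 nm 2D pitch
print(round(diffraction_order_combinations(0.33, 13.5, 400)))  # ~300
```

The inverse-square-root scaling is why the dose multipliers in Table 1 grow so quickly: halving the acceptable noise requires four times the photons, hence four times the dose.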

C for Charybdis, or Central-Ray Rotation

Charybdis, the other monster, was basically a whirlpool that sucked things in [7]. The use of low pupil fill for EUV inevitably leads to facing the rotating central ray and plane of incidence [8] across the exposure field, or "slit." This is a necessary consequence of enforcing uniform intensity across the exposure field. If the plane of incidence were required to be fixed across the slit, the rays would have to travel in the same direction across the slit after being collimated by the previous mirror. For that to happen, the incident ray at one position would need to be reflected from the previous mirror at a different angle (with respect to its local surface) than at another position (Figure 2). The angular dependence of reflectance would then cause the two positions to have two different intensities. Furthermore, the originally targeted rotational symmetry of the optical design (around the optical axis) would be broken. The only way to maintain both consistent reflected intensity and rotational design symmetry is to have the plane of incidence rotate across the slit.
Figure 2. To change the plane of incidence, the surface itself must be rotated. This results in a change of reflectance or reflectivity, preventing illumination uniformity from being maintained across the exposure field. For a low pupil fill, the illumination directions are constrained. When the plane of incidence is rotated, these directions are rotated as well, away from their originally targeted orientations relative to the features to be imaged. SK Hynix along with KLA report the NXE:3400 to have +/- 18.2 degrees of azimuthal rotation across slit [9], yet the proposed leaf hexapole range is about 15 degrees. For QUASAR illumination optimized for 40 nm pitch holes, the tolerance is +/- 15 degrees [10].

Steering Clear

In Homer's epic The Odyssey [11], Odysseus and his crew managed to pass between Scylla and Charybdis once without being completely destroyed, losing six men to Scylla. However, due to subsequent crimes by the crew, they were punished by being driven toward Charybdis on the return, with only Odysseus surviving by clinging to a tree. EUV’s similar story is a brush with the stochastic beast, which forces an approach toward a rotation spinning out of control, from which the only escape would be to reduce the field width. It's definitely a story to remember.

References

[1] https://www.theoi.com/Pontios/Skylla.html
[2] https://www.linkedin.com/pulse/need-low-pupil-fill-euv-lithography-frederick-chen
[3] J. Finders et al., Proc. SPIE 9776, 977619 (2016).
[4] M. van de Kerkhof et al., Proc. SPIE 10143, 101430D (2017).
[5] https://www.linkedin.com/pulse/stochastic-impact-defocus-euv-lithography-frederick-chen
[6] https://www.linkedin.com/pulse/very-different-wavelengths-euv-lithography-frederick-chen
[7] https://www.theoi.com/Pontios/Kharybdis.html
[8] R. Miyazaki and P. Naulleau, Synchrotron Radiation News, 32(4), 2019: https://escholarship.org/uc/item/07h5f8vn
[9] A. V. Pret et al., Proc. SPIE 10809, 108090A (2018).
[10] L. van Look et al., Proc. SPIE 10809, 108090M (2018).
[11] https://en.wikipedia.org/wiki/Odyssey
[post_title] => EUV faces Scylla and Charybdis [post_excerpt] => [post_status] => publish [comment_status] => open [ping_status] => open [post_password] => [post_name] => euv-faces-scylla-and-charybdis [to_ping] => [pinged] => [post_modified] => 2020-08-04 21:02:10 [post_modified_gmt] => 2020-08-05 04:02:10 [post_content_filtered] => [post_parent] => 0 [guid] => https://semiwiki.com/?p=289072 [menu_order] => 0 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [5] => WP_Post Object ( [ID] => 289030 [post_author] => 7546 [post_date] => 2020-08-02 10:00:35 [post_date_gmt] => 2020-08-02 17:00:35 [post_content] => Investors of capital, whether financial or human, utilize numerous methods to decide whether to participate with an early-stage / seed-stage technology company. The risks are, by definition, much higher than for later-stage companies pursuing investment in their Series B/C or beyond rounds. It is helpful to establish consistent “health appraisal” metrics and risk factors for use in your decision-making process - whether you are investing at pre-seed or seed stage, as an individual or as part of an investment group. In 1952, Dr. Virginia Apgar created a scoring system for doctors and nurses to assess newborns at birth. Medical professionals worldwide use the Apgar system to this day to quickly understand the baby’s condition immediately after birth. Low Apgar scores may indicate the baby needs special care, such as help with breathing. This article introduces an equivalent scoring system to determine the health of an early-stage startup, from a different perspective, by examining key company mortality factors. The company Apgar scoring system is called Capgar-4T$ ™, establishing a standard for diagnostic measurement of the key vital signs of the company and projecting the potential survivability of the startup at the time of evaluation. 
The mortality factor analysis and Capgar-4T$ scoring can be carried out by 3rd parties / potential investors, or internally by the management team and key stakeholders.

Background

Dr. Apgar used her name as a mnemonic for the five health categories on which a baby is scored at birth, each on a scale of 0 to 2, as detailed in the table below. The Apgar is measured at the 1-minute and 5-minute marks after birth. A score of 7 to 10 is considered normal for both the one-minute and five-minute Apgar tests. The Apgar score is not used to predict the newborn’s long-term health, behavior, or intelligence. It is solely a standard used by the obstetrical profession to quantify the health of the baby at a specific point in time, quickly summarizing the condition of the newborn against infant mortality. APGAR Point Scoring System

Capgar-4T$ System Overview

The health and viability of a startup is dependent on a wide variety of factors. Similar to the newborn Apgar scoring, the company Apgar has been established to provide a standard measurement technique for diagnosing the health of the enterprise, or equivalently, its potential mortality rate. Capgar incorporates a new mnemonic, “4T$”, based on the 5 critical categories used in determining the company’s overall health. The mnemonic represents the key vital signs at a point in time, with the 4 T’s representing Team, Target Market, Technology and Traction, and the $ representing the financial aspects of the company. Distinct from the use of Apgar scoring just for the newborn’s earliest moments, the Capgar-4T$ system can assist in periodic identification of operational deficiencies, useful for the potential investor as well as for the management team and the board of directors. Scoring for the 4T$ measurement uses the same Apgar 0, 1 or 2 rating scale; for the “gray” areas, 0.5 and 1.5 scores are added for each of the five categories. Although the Capgar-4T$ appraisal can be completed by a single individual, it is suggested to have 2 to 3 additional “evaluators” involved in reviewing the company’s health. After discussion and deliberation, the review panel can decide on the consensus score for each category. The final Capgar-4T$ score will range between 0 and 10. Note: if any of the 4T$ categories results in a score of “0”, the actual category value should be entered as “-2”. For meaningful use of the Capgar-4T$ system, the appraisal process should be of an appropriate duration to adequately cover the 4T$ categories for all involved - interacting with the startup management team and, if in place, outside members of the board and key advisors.
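The scoring rules above (five categories rated 0 to 2 in half-point steps, with any category scored 0 entered as -2) can be captured in a small helper. This is an illustrative sketch, not part of the Capgar-4T$ system itself; the category names follow the 4T$ mnemonic:

```python
ALLOWED = {0, 0.5, 1, 1.5, 2}
CATEGORIES = ("Team", "Target Market", "Technology", "Traction", "Financial")

def capgar_score(scores):
    """Total Capgar-4T$ score from a {category: rating} dict.

    A category rated 0 is entered as -2, per the scoring rule, so a single
    failing category drags the total down sharply.
    """
    if set(scores) != set(CATEGORIES):
        raise ValueError("score all five 4T$ categories")
    total = 0.0
    for category, rating in scores.items():
        if rating not in ALLOWED:
            raise ValueError(f"{category}: rating must be one of {sorted(ALLOWED)}")
        total += -2 if rating == 0 else rating
    return total

# A reasonably healthy startup lands near the top of the 0-10 range
print(capgar_score({"Team": 2, "Target Market": 1.5, "Technology": 2,
                    "Traction": 1, "Financial": 1.5}))  # -> 8.0
```

The -2 penalty makes the metric deliberately asymmetric: a zero in, say, Financial (out of cash) is a mortality signal, not just a weak category.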

Summary

The measurement and analysis of complex systems, whether human or manmade, is inherently difficult. The Capgar-4T$ system can be of value in evaluating an early-stage business as part of your decision-making process for involvement:
- To invest your capital, whether it be financial or human (your time & effort vs. ROI)
- To use in a strategic analysis and planning process, for the senior management team, board and key stakeholders in conjunction with corporate-level KPI reviews
A key aspect in reviewing the 4T$ scoring results is understanding the significance of a score of 0 or 0.5 in any of the categories. In some cases, a zero could dictate that urgent and immediate “life-saving” measures be taken, e.g. running out of cash to make payroll. A low score of 0 or 0.5 in Traction can point to either the startup not undertaking market discovery and validation, a la the “Lean Canvas” model, or the fact that the startup has a “solution looking for a problem”. The Capgar-4T$ ™ system delivers an effective diagnostic for the analysis of investment opportunities and offers a consistent process for establishing the overall viability of the startup at a specific point in time. The resulting 4T$ score can be used to identify the critical strategic issues to be addressed for future business growth and vitality, and ultimately to inform the decision to invest and/or participate as a member of the startup. The relationship between innovation drivers and business cycles is a well-understood symbiotic concept. Investment in early-stage technology startups, aka “Frontier Tech” or “Hard Tech”, is vital to continued innovation. A key element for renewed business growth will be the analysis and selection of the most promising startup investment options, utilizing an industry-standard rating system for high-tech ventures, as presented with the 4T$ process. 
The following sections provide a list of possible mortality-influencing topics for consideration prior to final category scoring. At the conclusion of each section there are category scorecards for each of the 4T$ rating levels. Also included is the 4T$ Summary Worksheet to summarize the analysis.
CAPGAR Team
• First and foremost, is there a team or is it just a founder - moonlighting or full-time?
• Commitment: bootstrapping with the founding team’s personal money or using O.P.M.
• Leader(s) - key personality traits; Winslow tested?
• Why? Is it a lifestyle business or lusting after a Beverly Hills mansion?
• Grit level? See Duckworth for “passion and perseverance”
• Sacrifices, if any - e.g. payroll details
• Organizational structure
• Who’s the “rainmaker”?
• Relevant industry experience and reputation; references
• Who, where, and how many on the team: employees, advisors and contractors
• Team esprit-de-corps / “fire-in-the-belly”
• History together / track record
• Equity split
CAPGAR Team Scoring Guidelines
CAPGAR Target Market
• Clear articulation of the “burning problem and its economic and/or social impact”
• “Economic $$ benefit” of using the products
• Solutions / alternatives
• Top-5 competitors: financials, market share, market-cap-to-sales ratio
• Trends and CAGR of the segment(s); where is it on the Gartner Hype Cycle?
• Competitive landscape, strategy and advantage (Porter); Blue Ocean opportunity?
• Sales cycle details, purchasing / budgeting cycles
• M&A history in the target market
• Position in the target market “value” stack
• First-mover vs. fast-follower
CAPGAR Target Market Scoring Guidelines
CAPGAR Technology
• Features and advantages vs. existing alternatives
• Evolutionary (10x to 20x faster, better, cheaper) or revolutionary (search engines, streaming video)
• Patent position
• Production status: “slide-ware” … pre-alpha … to FCS
• 3rd-party components: licensed or open-source
• Supply chain - vendors, process, lead times, location, multiple sources
• Impact on the overall industry or segment(s)
CAPGAR Target Technology Scoring Guidelines
CAPGAR Traction
• Sales results (LOI, MOU, conditional POs) are the best proxies to determine:
  o The gestation stage of the start-up
  o That the company’s messaging/branding/marketing mix is resonating in the market
  o Product-market fit
• Target account list; shotgun or rifle-shot?
• Sales collateral:
  o Presentations, datasheets, whitepapers
  o Press coverage, customer case studies & testimonials
• Sales channels - direct or indirect, territories
• # of customers, # of evaluation sites, “out-of-the-box” experience
• Impact on the industry of business success
• Prospecting meeting summaries; identification of key learnings
CAPGAR Target Traction Scoring Guidelines
CAPGAR Financial
• Bookings and billings history
• Current cash balance vs. burn rate
• Financial savviness: e.g. do they know why it is called “top-line” growth?
• 1st level: does the leader know how to read a P&L and balance sheet?
• P&L and cash-flow projections - believable or naive?
• Understanding of COGS and optimization paths
• Knowledge of the supply chain, capacity constraints, risk planning
• Investor specifics - equity: SAFE, convertible notes, cap table, warrants, ESOP
• Litigation, historical or pending
• Engagement status with angels, VCs and CVCs
CAPGAR Target Financial Scoring Guidelines
CAPGAR Summary Worksheet
COMPANY NAME: _______________________ DATE: ________________
PARTICIPANTS: _____________________________________________________________
CAPGAR Target Point System Guidelines
CAPGAR Scoring Guidelines

References

  1. Steve Blank youtube video on the Principles of Lean
  2. Winslow personality testing
  3. Angela Duckworth on “Grit”
  4. Gartner Hype Cycle
  5. Porter on Competitive Strategy
  6. Blue Ocean Strategy
Richard Curtin The Capgar Group [post_title] => Appraising the Health of Early-stage Startups [post_excerpt] => [post_status] => publish [comment_status] => open [ping_status] => open [post_password] => [post_name] => appraising-the-health-of-early-stage-startups [to_ping] => [pinged] => [post_modified] => 2020-08-04 21:05:12 [post_modified_gmt] => 2020-08-05 04:05:12 [post_content_filtered] => [post_parent] => 0 [guid] => https://semiwiki.com/?p=289030 [menu_order] => 0 [post_type] => post [post_mime_type] => [comment_count] => 1 [filter] => raw ) [6] => WP_Post Object ( [ID] => 289145 [post_author] => 28 [post_date] => 2020-08-02 08:00:03 [post_date_gmt] => 2020-08-02 15:00:03 [post_content] =>
John Cooley DeepChip. Just a little funny history: after I started my Silicon Valley Blog in 2009, several EDA CEOs bullied me into creating my own site to compete with John Cooley's DeepChip.com. Competition is good, right? One good friend and CEO even suggested I call it DeepNenni.com. Thankfully we came up with something a little less awkward: SemiWiki.com. Ten years later, here we are, the #1 EDA/IP/Foundry portal, and the rest, as they say, is history. SemiWiki DAC2020 Blogs And speaking of disruptive changes, our beloved DAC this year was virtual. The good news is that it became a worldwide event that anybody could join. As a result, the DAC attendance numbers jumped dramatically. The bad news is that, being the first virtual DAC, it had some technical challenges, but nothing that could not be overcome for the next DAC. Which brings us to some very important questions: Should DAC be virtual? And if so, how do we improve it? So let's all engage with John and let him do the dirty work for the greater good of the semiconductor ecosystem, absolutely. My anonymous answers are below: From: John Cooley <jcooley@zeroskew.com> Subject: Cooley's 11 quickie questions about Virtual DAC'20 Date: July 30, 2020 at 9:74:18 AM PDT Please, can you take 5 minutes to answer these quickie questions so I can compile your ANONYMOUS answers for all to see? - John Cooley of DeepChip And yes, ALL replies are anonymous! And yes, ALL replies are anonymous! And yes, ALL replies are anonymous! ----------------------------------------------------------------- 1. WHO ARE YOU? I am a [CHOOSE ONE]: an EDA user, an EDA vendor, an EDA academic, or other Other: 36-year EDA/IP/Foundry professional. ----------------------------------------------------------------- 2. Did you attend Virtual DAC'20 last week? [Yes or No] Yes ----------------------------------------------------------------- 3. Did you attend prior DAC'17 and/or DAC'18 and/or DAC'19? 
[SPECIFY] Yes, I have attended the last 36 DACs starting with the one in Albuquerque New Mexico in 1984. ----------------------------------------------------------------- 4. What Virtual DAC'20 *days* did you attend last week? [SPECIFY ALL] DAC Mon, DAC Tues, DAC Wed, DAC Thur, DAC Fri All ----------------------------------------------------------------- 5. Did you go to any of the DAC'20 papers? [Yes or No]  If yes, which specific DAC papers? [List Topics]  Overall were the DAC'20 papers useful?  [Yes or No] Yes, Most, Yes PLEASE COMMENT AS TO WHAT YOU THOUGHT OF THE DAC'20 PAPERS I reviewed most of the DAC papers. In comparison to a technical conference like the VLSI Symposium the DAC papers were a bit light on technical content. In comparison to vendor centric conferences I would say the DAC papers were above average in technical content. ----------------------------------------------------------------- 6. Did you go to any of the DAC'20 panels? [Yes or No]  If yes, which specific DAC panels? [List Topics]  Overall were the DAC'20 panels interesting?  [Yes or No] Yes, most of them, yes PLEASE COMMENT AS TO WHAT YOU THOUGHT OF THE DAC'20 PANELS Most of the panels were good but some had technical problems with sound and internet connections. A couple of the panels were pretty bad but I'm sure they know who they are so I won't panel shame them. ----------------------------------------------------------------- 7. Did you go to any of the DAC'20 keynotes? [Yes or No]  If yes, which specific DAC keynotes? [List Topics]  Overall were the DAC'20 keynotes interesting?  [Yes or No] Yes, most of them, Yes. PLEASE COMMENT AS TO WHAT YOU THOUGHT OF THE DAC'20 KEYNOTES The keynotes were pretty good. Wally Rhines, The Most Interesting Man in EDA, had the best keynote. The worst one for me was Calista Redmond, CEO, RISC-V Foundation. Blah blah blah very little content blah blah blah. ----------------------------------------------------------------- 8. 
Did you go to any of the Exhibitor booths? [Yes or No]  If yes, which specific vendor booths? [List Vendors]  Overall were the DAC'20 Vendor booths useful?  [Yes or No] No PLEASE COMMENT AS TO WHAT YOU THOUGHT OF THE DAC'20 VENDOR BOOTHS I did ask for booth feedback from the companies I work with and it was not all good. There is work to be done there for sure. The key missing ingredient is the ability to set up face-2-face customer meetings in a centralized location. This can save a lot of time and money for companies big and small.  Most DAC exhibitors I have worked with in the past can fill up DAC meeting rooms for 3 days straight, no problem. ----------------------------------------------------------------- 9. Did you encounter any serious technical difficulties while on the Virtual DAC'20 web site? [Yes or No]  If yes, please indicate which day and explain the problems you ran into. Yes, audio, delays, disconnections, your basic IT challenges. ----------------------------------------------------------------- A. How would you rate your Virtual DAC'20 experience?  [CHOOSE ONE] "Mostly GOOD", "Mostly NEUTRAL", "Mostly BAD" Mostly good. CAN YOU PLEASE COMMENT ON WHY YOU FEEL THAT WAY? I think given the short time the DAC Committee had to prepare for the new venue and under pandemic conditions the results were surprisingly good. ----------------------------------------------------------------- B. If for COVID-19 reasons they must do a Virtual DAC next year in 2021 would you attend? [Yes or No or Unsure] Yes, Definitely. CAN YOU PLEASE COMMENT ON WHY? I actually prefer the virtual format where everything is recorded and available for viewing afterwards. Even if DAC is live next year I think it should be recorded and open to all. Attendance is everything! Down with profits! Up with attendance! ----------------------------------------------------------------- And again, ALL replies are anonymous! And again, ALL replies are anonymous! 
And again, ALL replies are anonymous! - John Cooley of DeepChip.com
[post_title] => Cooley's 11 Quickie Questions About Virtual DAC'20 Last Week! [post_excerpt] => [post_status] => publish [comment_status] => open [ping_status] => open [post_password] => [post_name] => cooleys-11-quickie-questions-about-virtual-dac20-last-week [to_ping] => [pinged] => [post_modified] => 2020-08-04 21:07:54 [post_modified_gmt] => 2020-08-05 04:07:54 [post_content_filtered] => [post_parent] => 0 [guid] => https://semiwiki.com/?p=289145 [menu_order] => 0 [post_type] => post [post_mime_type] => [comment_count] => 3 [filter] => raw ) [7] => WP_Post Object ( [ID] => 289010 [post_author] => 14 [post_date] => 2020-08-02 06:00:11 [post_date_gmt] => 2020-08-02 13:00:11 [post_content] => Robert Maire Lam reported strong beat & guide, memory returns- China trade and Covid impact near zero- Looking at a strong H2 with WFE in mid $50B+-

Back on Cruise Control

Lam reported a great quarter, easily beating expectations, coming in at $2.79B and $4.78 in non-GAAP EPS- Guide is for $3.1B +-$200M and EPS of $5.06+-$0.40- Margins and all financial metrics were equally good- Guidance for 2020 WFE spending is for the mid to high $50B range- December is also projected to be a "strong" quarter, so the second half looks very solid for 2020 for Lam- China Trade & Covid = No Worries- Management said there was no impact from China trade restrictions on Lam's sales. Lam appears to be somewhat "self-policing" in terms of looking at non-military end-use customers in China. We find it a bit hard to believe that zero military end-use chips are made on Lam equipment sold in China. But apparently government regulators either don't care or aren't enforcing any restrictions. Aside from some slight increases in shipping costs, Lam said there was no ongoing impact from Covid, as they have worked through almost all the supply chain issues. Lam said they have pretty much caught up from prior supply chain constraints, and it sounds like they are no longer supply constrained as they are confident in $3B+ in sales. There may be some concern that at 34% of Lam's business, even bigger than Korea at 32%, China may be loading up on equipment prior to more restrictions being placed on sales. Lam does not believe that there is any front-end loading in China and seems to have no fear of any future restrictions. China trade restrictions are obviously a "paper tiger" - zhilaohu (纸老虎/紙老虎). The current administration talks a good game about getting tough on China and restricting US technology from being used in China for anti-US purposes. Reality is quite different, as Lam seems to have seen zero impact and expects zero impact in the future from China trade restrictions, as there aren't any. It almost sounds as if restrictions don't exist. This is obviously a far cry from ASML, which saw its EUV scanner sale to China halted. 
If ASML was listening to the Lam call, they should be screaming. China, at 34% of Lam's business, was mostly Chinese domestic customers, not even foreign companies' facilities in China. China trade restrictions have turned out to be nothing more than a "paper tiger" with no teeth and zero impact, orchestrated more for show than reality. Memory is climbing back to 61% of business..... Memory, read that as Samsung NAND, is clearly coming back. Though the spend is still far from 2018's insane levels, it is much higher than the low trough we saw last year. DRAM is slower to come back, as expected, but is still recovering nicely. Technology transitions are helping DRAM, and clearly overall demand for memory in general remains solid for work from home and school at home, etc. So far memory makers appear to be more disciplined in their spend patterns and haven't gone crazy again (perhaps the memory debacle is still fresh in their minds). At 61% of business, memory has not hit a level that should set off alarm bells. We would start to get concerned above 70%, and bells should go off above 80%.

2H2020 Looks good

The balance of 2020 looks very good according to Lam. This is somewhat juxtaposed to ASML, which saw a sharp order drop-off. There are likely two factors for the difference: Lam is more memory-driven than ASML, and ASML is probably a longer-term leading indicator given the lead times of litho tools versus the "turns" business of etch and dep tools. Guidance on WFE spending suggests that December should be up again from September.

The Stock

The earnings report was very positive and allayed any fears of Covid and China. The stock will obviously react very positively and has been on a tear since the depths of Covid concerns. With Covid and China behind us it seems like clear sailing for Lam at least for the balance of the year. For other stocks such as AMAT and KLAC we would expect similar positive tone as we would imagine that their China and Covid issues have been minimized as well. Semiconductor Advisors Semiconductor Advisors on SemiWiki [post_title] => LRCX Strong Beat and Guide [post_excerpt] => [post_status] => publish [comment_status] => open [ping_status] => open [post_password] => [post_name] => lrcx-strong-beat-and-guide-lam [to_ping] => [pinged] => [post_modified] => 2020-08-04 21:11:06 [post_modified_gmt] => 2020-08-05 04:11:06 [post_content_filtered] => [post_parent] => 0 [guid] => https://semiwiki.com/?p=289010 [menu_order] => 0 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [8] => WP_Post Object ( [ID] => 288836 [post_author] => 11830 [post_date] => 2020-07-31 10:00:36 [post_date_gmt] => 2020-07-31 17:00:36 [post_content] =>

There was a “research reviewed” panel on Thursday at DAC entitled Shortening the Wires Between High-Level Synthesis and Logic Synthesis. Chaired by Alric Althoff of Tortuga Logic, the panel explored methods to deal with wire delays in high-level synthesis and logic synthesis. The four speakers and their focus were:

  • Licheng Guo, PhD student at UCLA: circuits synthesized with HLS suffer frequency degradation due to long signal broadcasts - a set of easy-to-implement approaches to address this is discussed
  • Po-Chun Chien, Research Assistant at National Taiwan University: a method to multiplex pins in FPGAs through structural and functional circuit folding
  • Luca Amaru, Senior R&D Manager at Synopsys: a SAT-sweeper method for simplifying logic networks
  • He-Teng Zhang, Researcher at National Taiwan University: a scalable approach to fanout-bounded logic synthesis

Luca Amaru, a senior R&D manager at Synopsys, presented a very interesting approach to logic synthesis optimization that I’d like to cover in this post. Before getting to that, there was a poll question for this event - Does your organization use high-level synthesis (HLS) today? Below are the results of the poll. It appears HLS is catching on.

HLS Poll Results

Luca began by explaining that the work he was presenting is a collaboration between Synopsys, EPFL in Lausanne, Switzerland and UC Berkeley. First, a bit of background on the core concept of the presentation – SAT, or Boolean Satisfiability. This technique has application in formal verification and logic synthesis. The task at hand is to prove that two gates in a circuit are functionally equivalent. This means that there is no input pattern such that the two assume different values. The results of proofs like this can drive optimization approaches.

As proving functional equivalence is very complex, run times can explode in an exponential manner. Methods to manage this problem and why equivalence is important for logic synthesis were the focus of Luca’s presentation. The options available to solve a problem like this were enumerated by Luca as follows:

  • Exhaustive simulation
  • Partial simulation with binary decision diagram (BDD) construction
  • Structural hashing of and-inverter graphs (AIG)
  • Partial simulation with SAT solving, or SAT sweeping

Luca explained that the first three methods have limitations, either because of extreme run times, lack of scalability or inability to look at all aspects of the problem. The last approach, called SAT sweeping was chosen because of its scalability and completeness. It turns out that using this technique for logic synthesis has some additional benefits. These circuits are not as deep as those seen in general formal applications and there aren’t as many equivalencies to prove, allowing for optimization of run times.

Luca then got into some details of the “sweeping” aspect of the approach. An initial random simulation can be run to disprove the “easy” candidates. This still leaves a lot of gate pairs, and calling a SAT solver for each would still result in very long run times. Further optimization can be done by grouping gates with the same simulation pattern into equivalence classes. The next decision is how to process the network: top-down or bottom-up. It turns out the top-down approach works better for logic synthesis. Prior simulation patterns and heuristics can be added to optimize the process even further. All of this leads to the idea of a SAT preprocessing engine. Characteristics of this approach are summarized in the list below.
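The grouping step can be sketched as follows (a toy illustration under my own assumptions, not the Synopsys engine): simulate random patterns through each node and bucket nodes by their output signature; only nodes sharing a bucket remain candidates for the expensive SAT calls.

```python
import random
from collections import defaultdict

def equivalence_classes(nodes, n_inputs, n_patterns=64, seed=0):
    """Partition candidate nodes by simulation signature.
    `nodes` maps a name to an n-input Boolean function. Nodes whose
    outputs differ on any random pattern cannot be equivalent, so
    only groups sharing a signature need SAT calls afterwards."""
    rng = random.Random(seed)
    patterns = [tuple(rng.randint(0, 1) for _ in range(n_inputs))
                for _ in range(n_patterns)]
    buckets = defaultdict(list)
    for name, fn in nodes.items():
        signature = tuple(fn(*p) for p in patterns)
        buckets[signature].append(name)
    # Keep only groups with at least two members: the real candidates.
    return [group for group in buckets.values() if len(group) > 1]

nodes = {
    "g1": lambda a, b: 1 - (a & b),        # NAND
    "g2": lambda a, b: (1 - a) | (1 - b),  # equivalent to g1
    "g3": lambda a, b: a ^ b,              # XOR, distinct
}
print(equivalence_classes(nodes, 2))  # candidate groups for the SAT solver
```

Random simulation is cheap, so it pays for itself whenever it disproves a pair before a SAT call is ever made.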

SAT Preprocessing Engine

Luca concluded with benchmark results on 11 designs of varying size. Speed-ups ranging from 1.5X to an order of magnitude are possible with this approach, allowing for much better synthesis results post P&R thanks to more effective use of the run-time budget.

The information reviewed by all presenters in this session is relevant and useful. If you have a DAC pass, I recommend checking it out; I believe the DAC sessions will be available for an extended period. You can find this session on Shortening the Wires Between High-Level Synthesis and Logic Synthesis here.

Ljubisa Bajic is the CEO of Tenstorrent, a company he co-founded in 2016 to bring to market full-stack AI compute solutions with a compelling new approach to addressing the exponentially growing complexity of AI models.

What is Tenstorrent?

Tenstorrent is a next-generation computing company that is bringing to market the first dynamic artificial intelligence architecture that facilitates scalable deep learning. Tenstorrent’s vision is to enable Artificial General Intelligence - a more sentient capability. Tenstorrent’s mission is to build the substrate for radically more efficient and scalable AI computation while simplifying software development. This would enable larger, exponentially more complex models to be handled and deployed faster and more cost-effectively, creating ubiquitous, advanced AI solutions from the edge to the cloud. The first announced product from Tenstorrent is called Grayskull - a processor that has benchmarked as the fastest single-chip AI solution and will be available to ship to customers by Q4’20.

Why did you see a need for another AI company? 

AI is a burgeoning field, and the computation required for its use cases is evolving rapidly. We are still early in the growth phase where newer, emerging models and use cases are overwhelming even State of the Art (SoTA) architectures and implementations. The use cases and specializations continue to grow. What everyone agrees on, however, is that the complexity of the problems, and hence models, continues to explode; that there is a huge diversity of use cases that need specialized solutions to deliver on the Service Level Agreements (SLA); and that the cost of delivering the next generation of AI solutions - or their TCO - will grow exponentially if we stick to current SoTA. Moreover, there are solutions that can thrive in some use cases, but fail in others. For example, specialized AI acceleration solutions that do well at image processing today might not do well for natural language processing or recommendation engines. GPU-based solutions may be better than CPU-based for quite a few use cases, but not versus specialized AI acceleration. You might be able to get to the answer for some models, but it may take too long (years) with a CPU and be prohibitively expensive, even with specialized hardware. The cost estimates for training the recently announced GPT-3 are exorbitant and would take an inordinately long time even with specialized acceleration. Some cloud companies offer hundreds of different SKUs of compute to fit the right requirements. Simply put, while great strides have been made by various players in the AI space, the richness of the problem, when paired with  emerging requirements, means that new approaches, architectures and solutions are needed to solve the future problems. Just as there were a slew of search companies in the late ‘90s, Google came in much later with a novel approach that revolutionized search.

What makes Tenstorrent different?

First, Tenstorrent has rethought the computation problem from the software point of view. We believe future software 2.0 will not be written in the traditional fashion but at a higher level, with much of the lower-level code generation happening automatically. So greater intelligence is needed in the compute elements to handle allocation, communication and other such adjustments at run time. Tenstorrent starts with the idea of making a new system much more brain-like. A human brain operates at (effectively) less than 20W - considering the amount it computes, the efficiency is sky high. A lot of that efficiency comes from the human ability to drop unnecessary data and computation, especially with learned information. With Tenstorrent’s conditional execution, you can achieve similar gains, speeding up some compute models by orders of magnitude. This is built on top of a hardware architecture focused on compute efficiency - finding the right granule size for the most efficient compute, memory and storage. The compute element is designed for optimal compute, integrating lossless compression for storage efficiency and network processing for communication efficiency. It is designed to scale efficiently up to 100K nodes. The software approach is also geared to allow massive scale, and it takes the pressure off ahead-of-time (AOT) scheduling through run-time control and firmware that can programmably manage compute, storage allocation and other activities. This is a non-trivial problem that has been handled in a novel way. The software tooling supports not only compiler generality for neural networks, finding the balance between AOT scheduling and run-time allocation for maximum efficiency, but is also versatile enough to handle non-neural pre- or post-processing steps for a more holistic, simpler overall solution from a programmer’s point of view.
To summarize, the solution is a scalable, distributed network computer that brings the non-linear benefits of conditional computing and performance that is decoupled from both model size and batching, combined with a software approach that simplifies deployment across a wide variety of AI models.

What is a Tensix? 

The Tensix is the smallest self-contained constituent unit of the network computer that forms the base of Tenstorrent’s solutions. The Tensix consists of a packet engine that routes packets into and out of the Tensix, a compute engine, 1MB of SRAM and five RISC processors that give it its unique, granular programmability. The compute engine consists of a SIMD engine and a tensor/matrix engine, which gives the Tensix its name. Tenstorrent made a conscious decision to move away from large matrix multiplication units with lots of parallelism but little control over what gets computed and how. The Tensix has a compact granule that, in its network, can easily be filled with parallel tasks, and yet the processors help regulate conditional execution through their autonomous ability to stop compute on threads that have reached their optimum level of accuracy. It still packs a lot of punch. The SIMD unit can do vector math for AI and non-AI calculations such as signal processing, and it supports numerous floating-point formats. The tensor unit can accelerate both convolutional and GEMM-type operations. With 1MB of SRAM, there is plenty of space for the computation to occur without stressing the rest of the network-on-chip or DRAM - once the necessary information is loaded.
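The thread-stopping behavior described here is, in spirit, an early-exit scheme. A hypothetical sketch (the stages, confidence function and threshold are my own illustration, not Tenstorrent's firmware or API):

```python
def run_with_early_exit(x, stages, confidence, threshold=0.9):
    """Run a pipeline of compute stages, stopping as soon as the
    intermediate result is confident enough - the essence of
    conditional execution: skip work that no longer changes the answer."""
    stages_run = 0
    for stage in stages:
        x = stage(x)
        stages_run += 1
        if confidence(x) >= threshold:
            break  # the remaining stages are never executed
    return x, stages_run

# Five identical refinement stages; confidence is just the score itself.
stages = [lambda v: v + 0.25] * 5
score, ran = run_with_early_exit(0.3, stages, confidence=lambda v: v)
print(ran)  # 3 -- two of the five stages were skipped
```

The saving is data-dependent: easy inputs exit early and cheap, hard inputs use the full pipeline, which is why the speed-up is non-linear rather than a fixed factor.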

What applications / use cases is Grayskull best suited for? Wormhole? 

Grayskull is specified to be ideal for inference. Tenstorrent’s approach of having the right amount of onboard SRAM and efficient, high-bandwidth DRAM gives the product a great deal of flexibility and versatility. For inference, its strengths are not only its performance but also its low latency, which is especially important for real-time response. In its production 65W form-factor PCIe card, it does extremely well at inferencing on natural language processing models like BERT and GPT, demonstrating scores that are over five times the performance of GPU-based alternatives in the same power envelope. Grayskull represents the world’s first conditional compute processor, which can enable additional performance gains over the previously stated 5X. It is also well suited for vision-recognition systems and recommender networks, which are currently critical in various verticals. Grayskull can also be used for training in some configurations; however, the follow-on product - Wormhole - which will be sampling in early 2021 and shipping in the second half of the year, is designed specifically to improve training capabilities by introducing a much higher-bandwidth memory system and coherent connectivity that will scale to much larger accelerator configurations with few host CPUs.

What trends in AI / ML do you find most interesting right now? How do you think they will play out in the coming years? 

The reality is that AI is transforming almost every industry. Deep learning will continue to become more sophisticated and will eventually enable machines to piece together new ideas from old ones - potentially even “expressing” emotion. But even without that, AI is becoming increasingly useful as people and industries rely more on it for applications like drug and vaccine discovery, or using ECGs and other routine reports to predict health issues or provide early diagnosis. This requires more sophisticated models. AI systems are still just beginning to do activities that humans take for granted - the dexterity of hands handling delicate objects, the ability to drive vehicles, conduct conversations, achieve context-based comprehension and so many more. The models are growing exponentially in size and complexity, which puts a severe strain on the compute resources required just to keep up. The result is that very few companies can afford to train these sophisticated models, which limits the proliferation of these capabilities. There will be new techniques that simplify software and models but, even with that, models will continue to grow. So we see a trend toward greater innovation in architecture, software and hardware development to achieve the next growth phase.

What’s the future for Tenstorrent? 

At its heart, Tenstorrent is an AI systems company. We are quite excited about the first generation of products hitting the performance target and demonstrating the capabilities of our base architecture and the promise of conditional execution. We are learning very quickly from customer use cases in AI inference and clearly seeing the pain points that we can solve - needless to say, performance and TCO figure prominently, as does software simplification. We are looking forward to getting our products into the market and then accelerating our roadmap on both hardware and software to revolutionize training with our Wormhole product’s conditional computation and hyper-efficient scaling. The Grayskull inference solution is beginning evaluations and can show a multi-fold improvement with models trained on other platforms. However, if the training platform is also Tenstorrent-based, you will see an even bigger gain at both ends.

Why Toronto vs. Silicon Valley? 

That is a fair question. The founding team and most of the core engineering team were educated in Toronto and worked there prior to Tenstorrent. Toronto is North America’s fourth-largest city, a cosmopolitan metropolis with a thriving cultural scene and a respectable presence in the hardware and software world, being a strong center for graphics and parallel compute. Over the last two decades, Toronto has also become the epicenter of artificial intelligence outside Silicon Valley, with universities that pioneered deep learning and three Turing Award winners who taught there and created a groundswell of thinkers, implementers and engineers. It also has the fourth-largest pool of tech resources in North America, particularly concentrated with talent in artificial intelligence. It is fertile recruiting ground for some of the top talent, and of course a great city to live in.

About Tenstorrent:

Tenstorrent is a next-generation computing company that is bringing to market the first dynamic artificial intelligence architecture facilitating scalable deep learning. The company's mission is to address the rapidly growing computing demands for training and inference, and to produce highly programmable and efficient AI processors with the right computational density designed from the ground up. Headquartered in Toronto, Canada, with U.S. offices in Austin, Texas, and Silicon Valley, Tenstorrent brings together experts in the fields of computer architecture, complex processor and ASIC design, advanced systems and neural network compilers, who have developed successful projects for companies including Altera, AMD, Arm, IBM, Intel, Nvidia, Marvell and Qualcomm. Tenstorrent is backed by Eclipse Ventures and Real Ventures, among others. For more information visit www.tenstorrent.com.

What do you do next when you've already introduced an all-in-one extreme edge device, supporting AI and capable of running at ultra-low power, even harvested power? You add a software flow to support solution development and connectivity to the major clouds. For Eta Compute, that's their TENSAI flow.
The vision of a trillion IoT devices only works if the great majority of those devices can operate at ultra-low power, even harvested power. Any higher, and the added power generation burden and field maintenance make the economics of the whole enterprise questionable. Alternatively, reduce the number of devices we expect to need, and the economics of supplying those devices looks shaky. The vision depends on devices that are close to self-sufficient in power.

Adding to the challenge, we increasingly need AI at the extreme edge. This is in part to manage the sensors - to detect locally and communicate only when needed. When we do most of what we need locally, there's no need to worry about privacy and security. Further, we often need to provide real-time response without the latency of a round trip to a gateway or the cloud. And operating expenses go up when we must leverage network or cloud operator services (such as AI).

All-in-one extreme edge

Eta Compute has already been leading the charge on ultra-low-power (ULP) compute. They do this by building on their proprietary self-timed logic and continuous voltage and frequency scaling technology. Through partnerships with Arm for Cortex-M and NXP for CoolDSP, they have already established a multi-core IP platform for ULP AI at the extreme edge. This runs typically below 1mW when operating and below 1uA in standby. It can handle a wide range of use cases - image, voice and gesture recognition, sensing and sensor fusion, among other applications. It can run any neural network and supports the flexible quantization now commonly seen in many inference applications.
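To give a rough idea of what such quantization involves, here is a generic affine int8 scheme for illustration - Eta Compute's actual formats and tooling may differ:

```python
def quantize_int8(values):
    """Affine (asymmetric) quantization of floats to int8.
    Maps the [min, max] range of the data onto [-128, 127]; common in
    edge inference because int8 math is far cheaper than float32."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0  # guard against a constant tensor
    zero_point = round(-128 - lo / scale)
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the int8 representation."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, s, z = quantize_int8(weights)
restored = dequantize(q, s, z)
# restored approximates weights to within one quantization step (scale)
```

"Flexible" quantization generalizes this idea, letting different layers or tensors use different bit widths and scales depending on their accuracy sensitivity.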

TENSAI Flow

Semir Haddad (Sr Dir Product Marketing) told me that Eta Compute’s next step along this path is to provide a full-featured software solution to complement their hardware. This is designed to maximize ease of adoption, requiring little to no embedded programming, and they will supply reference designs in support of this goal. The software flow (in development) is called TENSAI. Networks are developed and trained in the cloud in the usual way, e.g. through TensorFlow, then reduced through TensorFlow Lite. The TENSAI compiler takes the handoff and optimizes the network for the embedded Eta Compute platform. It also provides all the middleware: the AI kernel, FreeRTOS, the hardware abstraction layer and sensor drivers. The goal, as I said before, is that not a single line of new embedded code should be needed to bring up a reference design.

Azure, AWS, Google cloud support

Data collection connects back to the cloud through a partnership with Edge Impulse (who I mentioned in an earlier blog). They support connections with all the standard clouds - Azure, AWS and Google Cloud (he said they see a lot of activity on Azure). Semir stressed there is an opportunity here to update training for edge devices, harvesting data from the edge to improve the accuracy of abnormality detection, for example. I asked how this would work, since sending a lot of data back from the edge would kill power efficiency. He told me that this would be more common in an early pilot phase, when you’re refining training and not so worried about power. Makes sense. Semir also said that their goal is to provide a platform that is as close to turnkey as possible, except for the AI training data. They even provide example trained networks in the NN Zoo. I doubt they could make this much easier. TENSAI flow is now available. Check HERE for more details.

All-In-One Extreme Edge with Full Software Flow

by Bernard Murphy on 08-04-2020 at 2:00 pm

Obstacles to Edge AI

What do you do next when you’ve already introduced an all-in-one extreme edge device, supporting AI and capable of running at ultra-low power, even harvested power? You add a software flow to support solution development and connectivity to the major clouds. For Eta Compute, their TENSAI flow.

The vision of a trillion IoT… Read More


Cadence on Automotive Safety: Without Security, There is no Safety

by Mike Gianfagna on 08-04-2020 at 10:00 am

Attack vectors and EDA countermeasures

One of the Designer Track sessions at this year’s DAC focused on the popular topic of automotive electronics. The title was particularly on-point: The Modern Automobile: A Safety and Security “Hot Zone”. The session was chaired by Debdeep Mukhopadhyay, a professor at the Indian Institute of Technology in Kharagpur.

This special, invited… Read More


Combining AI and Advanced Signal Processing on the Same Device

by Lauro Rizzatti on 08-04-2020 at 6:00 am


A lot has been written and even more spoken about artificial intelligence (AI) and its uses. Case in point, the use of AI to make autonomous vehicles (AV) a reality. But, surprisingly, not much is discussed on pre-processing the inputs feeding AI algorithms. Understanding how input signals are generated, pre-processed and used… Read More


Webinar – Designing and Verifying HBM ESD Protection Networks, August 11, 10AM PST

by Tom Simon on 08-03-2020 at 10:00 am


Every chip needs ESD protection, especially RF, analog and nm designs. Because each type of design has specific needs relating to IOs, pad rings, operating voltage, process, etc., it is important that the ESD protection network is carefully tailored to the design. Also, because of interactions between the design and its ESD protection… Read More


EUV faces Scylla and Charybdis

by Fred Chen on 08-03-2020 at 6:00 am


It is now time for the EUV community to realize they are caught between the proverbial Scylla and Charybdis. In Greek mythology, the two monsters terrorized ships that were unlucky enough to pass between them. By avoiding one, you approached the other.

S for Scylla, or Stochastics

Scylla was a former beautiful nymph turned into

Read More

Appraising the Health of Early-stage Startups

by Richard Curtin on 08-02-2020 at 10:00 am

APGAR Point Scoring System

Investors of capital, whether financial or human, utilize numerous methods to decide to participate with an early-stage / seed-stage technology company. The risks are, by definition, much higher than later stage companies pursuing investment in their Series B/C or beyond rounds. It is helpful to establish consistent “health… Read More


Cooley’s 11 Quickie Questions About Virtual DAC’20 Last Week!

by Daniel Nenni on 08-02-2020 at 8:00 am

John Cooley DeepChip.

Just a little funny history, after I started my Silicon Valley Blog in 2009 several EDA CEOs bullied me into creating my own site to compete with John Cooley’s DeepChip.com. Competition is good, right? One good friend and CEO even suggested I call it DeepNenni.com. Thankfully we came up with something a little less awkward,

Read More

LRCX Strong Beat and Guide

by Robert Maire on 08-02-2020 at 6:00 am

Robert Maire

Lam reported strong beat & guide, memory returns-
China trade and Covid impact near zero-
Looking at a strong H2 with WFE in mid $50B+-

Back on Cruise Control

Lam reported a great quarter, easily beating expectations, coming in at $2.79B and $4.78 in non-GAAP EPS-
Guide is for $3.1B ±$200M and EPS of $5.06 ±$0.40-
Margins and all… Read More


Synopsys Presents SAT-Sweeping Enhancements for Logic Synthesis

by Mike Gianfagna on 07-31-2020 at 10:00 am


There was a “research reviewed” panel on Thursday at DAC entitled Shortening the Wires Between High-Level Synthesis and Logic Synthesis. Chaired by Alric Althoff of Tortuga Logic, the panel explored methods to deal with wire delays in high-level synthesis and logic synthesis. The four speakers and their focus were:

  • Licheng
Read More

CEO Interview: Ljubisa Bajic of Tenstorrent

by Daniel Nenni on 07-31-2020 at 6:00 am

Ljubisa Bajic of Tenstorrent

Ljubisa Bajic is the CEO of Tenstorrent, a company he co-founded in 2016 to bring to market, full-stack AI compute solutions  with a compelling new approach to address the exponentially growing complexity of AI models.

What is Tenstorrent?

Tenstorrent is a next-generation computing company that is bringing to market the first… Read More