
HCL Expands Cloud Choices with a Comprehensive Guide to Azure Deployment
Published February 24, 2021

HCL Compass is quite a powerful tool to accelerate project delivery and increase developer productivity. Last August I detailed a webinar about HCL Compass that will help you understand the benefits and impact of a tool like this. This technology falls into the category of DevOps, which aims to shorten the systems development life cycle and provide continuous delivery with high software quality. Scalability across the enterprise is a key factor for success here, so cloud migration is definitely a consideration. Recently, I detailed how HCL is assisting its users to get to the Amazon Elastic Compute Cloud. Vendor choice is definitely a good thing. I’m happy to report that HCL expands cloud choices with a comprehensive guide to Azure deployment.

Microsoft Azure, commonly referred to simply as Azure, is a major force in cloud computing. Moving any enterprise application to the cloud provides significant benefits, including:

  • Lower costs
  • Increased agility
  • Reliable global delivery

There are some specific impacts that HCL’s migration guide cites (a link appears at the end of this article). Some of these are worth repeating:

  • Cost effectiveness: VMs (virtual machines) deployed in the cloud remove the capital expense of procuring and maintaining equipment as well as the expense of maintaining an on-premises data center. These VMs can host instances of HCL Compass
  • Scalability: Estimating data center capacity requirements is very difficult. Over-estimation leads to wasted money and idle resources. Under-estimation degrades the business’s ability to be responsive. Cloud computing resources can easily and quickly be scaled up or down to meet demand. Of particular interest regarding this point, Azure provides autoscaling that automatically increases or decreases the number of VM instances as needed
  • Availability: Azure, like other cloud providers, invests in redundant infrastructure, UPS systems, environmental controls, network carriers, power sources, etc. to ensure maximum uptime. Most enterprises simply cannot afford this kind of scale

The guide from HCL provides everything you need to plan your HCL Compass deployment or migration in Azure. There are a lot of items to consider, so having all of this in one place is very useful. Here are just a few of the considerations that are addressed in the HCL guide:

Supported database platforms: Ensuring you are using the correct version of the required database software is key. Versions between on-premises and the cloud are discussed, along with recommendations on how to utilize an on-premises database for a cloud deployment. This latter discussion supports a hybrid environment.

Accessing the data: For a cloud deployment, the preferred method of data access is to utilize the HCL Compass web client. The specific browsers and versions to use are specified, along with the cautions and pitfalls of other approaches.

Requisite software: Along with Linux database versions, the required versions for installation software, Java, Windows and Linux are discussed.

Many other topics are explained in detail, including:

  • Performance and performance monitoring
  • Cross-server communication
  • Load balancing
  • SSL enablement
  • Single sign-on implementation
  • LDAP authentication
  • Multi-site implementation
  • EmailRelay considerations

A detailed discussion of migration considerations is also presented, along with sample implementation scenarios. One scenario treats HCL Compass and the database in Azure. The other treats HCL Compass in Azure with the database on-premises. All in all, this guide provides a complete roadmap to implement HCL Compass in Azure. I can tell you from first-hand experience that cloud migration can be challenging. Software is provisioned and managed differently in a cloud environment. As long as you understand those nuances, things go smoothly.

The migration guide provided by HCL helps you discover all those nuances. You can get your copy of this valuable guide here. Download it now and find out how HCL expands cloud choices with a comprehensive guide to Azure deployment.

Finding Large Coverage Holes. Innovation in Verification
Published February 24, 2021

Is it possible to find and prioritize holes in coverage through AI-based analytics on coverage data? Paul Cunningham (GM, Verification at Cadence), Jim Hogan and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Using Machine Learning Clustering To Find Large Coverage Holes. This paper was presented at Machine Learning for CAD, 2020. The authors are from IBM Research, Haifa, Israel.

Improving coverage starts with knowing where you need to improve, especially where you may have significant holes. Getting to what you might call good scalar coverage (covered functions, statements, and the like) is fairly mechanical. Assertions provide a set of more complex checks on interdependencies, high value but necessarily low coverage. These authors look at cross-product checks, relationships between events, somewhat reminiscent of our first blog topic.

It is important first to understand what the authors mean by a cross-product coverage task. This might be, say, a <request, response> pair where <request> may be one of memory_read, memory_write, IO_read, IO_write and <response> may be ack, nack, retry, reject. Coverage is then over all feasible combinations.

Events are assumed related through naming. In their convention, reg_msr_data_read breaks into {reg, msr, data, read}, which is close to {reg, msr, data, write} and not quite as close to {reg, pcr, data, write}. (You could easily adapt this to different naming conventions.) From these groups they run K-means clustering analysis to group features (reg, msr, etc.). From these clusters, they build cross-product structures. This starts with sets of feature locations, counting from the start and end of an event, then finding anchors: the most commonly occurring, and therefore likely most significant, features in events (reg, for example). The authors call the groups of features falling between these anchors dimensions. Though not quite explicit in the paper, it seems these provide a basis for probable event combinations which ought to be covered. From that they can then monitor covered and non-covered events. Better yet, they can provide very descriptive guidance on which combinations they expected to see covered but did not.
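To make the cross-product idea concrete, here is a minimal Python sketch. The request/response values follow the paper's <request, response> example, but the set of "observed" pairs is invented for illustration, not data from the paper; the sketch simply enumerates the coverage space and reports the uncovered tasks.

```python
from itertools import product

# Event values from the paper's <request, response> example.
requests = ["memory_read", "memory_write", "IO_read", "IO_write"]
responses = ["ack", "nack", "retry", "reject"]

# The cross-product coverage space is every feasible <request, response> pair.
space = set(product(requests, responses))

# Pairs actually observed during simulation (illustrative data only).
covered = {
    ("memory_read", "ack"), ("memory_read", "retry"),
    ("memory_write", "ack"), ("IO_read", "ack"),
}

# Anything in the space but never observed is a coverage hole.
holes = sorted(space - covered)
print(f"{len(holes)} of {len(space)} cross-product tasks uncovered")
for req, resp in holes[:3]:
    print(f"hole: <{req}, {resp}>")
```

In a real flow the covered set would be harvested from simulation logs, and the interesting part, as the paper describes, is deciding which combinations are feasible in the first place.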

Paul’s view

The depth of this paper can be easy to miss on a quick read. It’s actually very thought-provoking and draws on ML techniques in text document classification to help with verification. Very cool!

The verification methodology in this paper is based on “coverage events” represented as a concatenation of words, e.g. “reg_msr_data_read”. However, the paper would seem to be equally applicable to any meta-data in the form of semi-structured text strings – it could be debug messages for activity on a bus or even the names of requirements in a functional specification. The heart of the paper is a set of algorithms that cluster similar coverage events into groups, break apart the concatenation of words and then intelligently re-combine the words to identify other concatenations that are similar but as yet un-covered events. They use a blend of K-means clustering, non-negative matrix factorization (NMF), and novel code to do this. The paper is a bit thin on specifics of how K-means and NMF are applied, but the essence of the overall method still shines through and the reported results are solid.

The more I think about this paper, the more the generality of their method intrigues me – especially the potential for it to find holes in a verification plan itself by classifying the names of functional requirements. The approach could quite easily be added as an app to a couple of the coverage tools in our Cadence verification flow… a perfect opener for an intern project at Cadence – please reach out to me if you are reading this blog and are interested.
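As a rough illustration of the clustering step, here is a small Python sketch. It uses the paper's underscore-splitting convention, but substitutes a simple Jaccard-similarity greedy grouping for the actual K-means/NMF pipeline the authors use, and the event names are invented.

```python
def tokens(event):
    # Split an underscore-delimited event name into its word features,
    # per the paper's convention: "reg_msr_data_read" -> {reg, msr, data, read}.
    return set(event.split("_"))

def similarity(a, b):
    # Jaccard similarity over word features: a simple stand-in for the
    # vector-space distance a K-means implementation would use.
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

def cluster(events, threshold=0.5):
    # Greedy single-pass grouping: events join the first group whose
    # representative they resemble closely enough, else start a new group.
    groups = []
    for e in events:
        for g in groups:
            if similarity(e, g[0]) >= threshold:
                g.append(e)
                break
        else:
            groups.append([e])
    return groups

events = [
    "reg_msr_data_read", "reg_msr_data_write",
    "reg_pcr_data_write", "mem_bank0_refresh",
]
print(similarity("reg_msr_data_read", "reg_msr_data_write"))  # close
print(similarity("reg_msr_data_read", "reg_pcr_data_write"))  # less close
print(cluster(events))
```

The similarity values reproduce the ordering the paper describes: reg_msr_data_read is closer to reg_msr_data_write than to reg_pcr_data_write. Re-combining tokens across a group then suggests similar but as-yet-uncovered event names.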

Jim’s view

Paul made an interesting point (separately). At the block level people are already comfortable with functional coverage and randomization. But at the SoC level, engineers typically use directed tests and don't have as good a concept of coverage. They want functional coverage at the SoC level, but it’s too much work. Maybe this is a more efficient way to get a decent measure of coverage. If so, that would definitely be interesting. I see it as an enhancement to existing verification flows, not investable as a standalone company, but certainly something that would be interesting as a quick acquisition. This would follow a proof of concept of no more than a month or so – a quick yes/no.

My view

Learning techniques usually focus on pure behaviors. As Paul suggests, this method adds a semi-semantic dimension. It derives meaning from names, which I think is quite clever. Naturally that could lead to some false positives, but I think those should be easy to spot, leaving the signal-to-noise ratio quite manageable. It could be a nice augment, perhaps, to PSS/software-driven verification.

Single HW/SW Bill of Material (BoM) Benefits System Development
Published February 23, 2021

Most large electronics companies take a divide-and-conquer approach to projects, with clear division lines set between HW and SW engineers, so quite often the separate teams have distinct methodologies and ways to design, document, communicate and save a BoM. This division can lead to errors in the system development process, so what is a better approach? To learn more, I attended a virtual event from Perforce, their Embedded DevOps Summit 2021, which I blogged about last month. They had three concurrent tracks: Plan, Create and Verify. I chose the Create track and listened to the presentation, Implementing a Unified HW/SW BoM to Reduce System Development. Vishal Moondhra was the presenter, and his company Methodics was acquired by Perforce in 2020.
IP is a term used by both HW and SW teams, and it's the abstraction of the data files that define an implementation, plus all of the meta-data that defines its state.

BoM

A SW IP example would be a USB device driver, and a HW IP example an SRAM block. The Bill of Materials (BoM) shows the versioned hierarchy of all IP used to define a system, both HW and SW.

The SW blocks are shown in green, along with their version numbers, while IP2 and IP1 are HW blocks with their own version numbers and hierarchy. If you examine the hierarchy carefully there are two instances of IP13, one at version 8 and the other at version 9, so a version conflict has occurred, and your BoM system needs to identify this so that consistency can be restored. Your SW team may be using Git while the HW team prefers Perforce, and a unified BoM allows this mix-and-match approach.

Meta-data comprises the dependencies, file permissions, design hierarchy, instance properties and usage for each IP, and the Perforce approach is that a single system is used for both traceability and reuse. Once again, any Data Management (DM) system can be used.

Being able to trace which SW driver applies to a specific HW block is fundamental to maintaining consistency during system design, and a unified BoM takes care of this compatibility requirement. Tracking patches and updates across HW and SW ensures that no mismatches creep into the system during design. The Platform BoM knows all of the versions being used in both HW and SW BoMs, and it's fully traceable, so that you always know which SW component was delivered with each HW component. If a SW driver is incompatible with a particular HW block, then you can quickly identify that occurrence with a unified Platform BoM. If your Platform were only a handful of HW and SW blocks, then a simple Excel spreadsheet would suffice to track dependencies, but modern SoC systems have thousands of HW IP blocks and millions of lines of code, so having a unified BoM system with traceability is the better choice.
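The IP13 version conflict described above can be caught mechanically. Here is a minimal Python sketch; the IP names and version numbers are invented to mirror the article's example, and a real IPLM system would of course track far richer metadata than name/version pairs.

```python
from collections import defaultdict

# Hypothetical unified BoM flattened into (ip_name, version) entries
# gathered from every level of the HW/SW hierarchy.
bom = [
    ("IP1", 3), ("IP2", 1),
    ("IP13", 8),          # instantiated under one HW block
    ("IP13", 9),          # instantiated under another - conflicting version
    ("usb_driver", 5),    # SW IP tracked in the same BoM
]

def version_conflicts(entries):
    # Group versions by IP name; any IP seen at more than one version
    # is a consistency violation the unified BoM should flag.
    seen = defaultdict(set)
    for name, version in entries:
        seen[name].add(version)
    return {name: sorted(v) for name, v in seen.items() if len(v) > 1}

print(version_conflicts(bom))
```

With a handful of blocks a spreadsheet could do this by eye; the point of a unified, traceable BoM is that the same check keeps working across thousands of HW IP blocks and their associated SW components.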
Sending out SW patches to your released Platform demands that proper testing has been validated, so keeping track of dependencies is paramount for success. With IPLM a SW team can use the concept of private resources, where all of the details are abstracted out, leaving behind instead just the results of a build process. It still provides consistency, traceability and dependencies. One example shown in the presentation was using a private resource for an ARM SW stack. Working as a team with a unified BoM breaks down the old silo approach that separated HW and SW designers from each other. Design metadata can be managed to ensure traceability, promote transparency across engineering teams and enable IP to be reused, all while separate DM systems continue to be used.

Summary

The Methodics IPLM implements this unified BoM approach, so that your engineering teams can focus on completing their system work, while knowing that their HW and SW IP is fully traceable with centralized management and that their IP releases are not introducing bugs. To watch the 25-minute archived presentation online, visit here.


Achronix Demystifies FPGA Technology Migration
Published February 23, 2021

System designers who are switching to a new FPGA platform have a lot to think about. Naturally a change like this is usually made for good reasons, but there are always considerations regarding device configurations, interfaces and the tool chain to deal with. To help users who have decided to switch to their FPGA technology, Achronix offers an application note, titled “Migrating to Achronix FPGA Technology”, that explains the differences that may be encountered. As the application note states, Achronix FPGA technology will be familiar to anyone using another platform, but there will be some differences that are useful to understand.

From my reading, what is interesting is how the application note offers information that could help someone who had not yet decided and was looking to see how Achronix FPGA technology compares to other solutions. Indeed, the first section of the app note is useful for understanding which Achronix devices are good candidates as substitutions for the range of Intel and Xilinx devices. Kintex UltraScale, Kintex UltraScale+, Virtex UltraScale and Virtex UltraScale+ devices, along with Arria 10 and Stratix 10 devices, are listed along with suitable Achronix offerings ranging from the AC7t750 up to the AC7t3000.
Of course, there are many caveats, such as included memory or DSP blocks, etc. Achronix hints early on in the app note at unique capabilities for AI/ML and network-on-chip (NoC) that their Speedster7t family offers that have no analog in the devices from Intel or Xilinx. Achronix includes a cross-reference of core silicon components, including lookup tables, logic arrays, distributed math functions, block memory, logic memory, DSP and PLLs. Because many of the core components are similar, few, if any, RTL modifications are required during porting.

Noticeable differences appear in the interface subsystems available on various FPGA technologies. Achronix has placed a priority on including hard interface subsystems within the I/O ring. This eliminates the need for soft IP interfaces that use up valuable FPGA fabric. This also makes interface integration and timing closure easier. Achronix Speedster7t offers higher performance in most interface categories, including up to 4 x 400G Ethernet, Gen5 x16 PCIe and 72-bit DDR4 at 3.2Gbps/pin, all in hard IP. Their SerDes supports up to 112Gbps. Lastly, they offer a unique and highly effective NoC.

Aside from physical specifications, a user contemplating migrating to Achronix will want to understand the supported tool flow. Unlike many other FPGA vendors, Achronix has opted to use Synopsys Synplify Pro in conjunction with their standalone ACE place-and-route tool. Synplify is recognized as an industry leader already, and it is used by many users in place of the vendor-supplied options. Achronix users benefit from a mature tool flow that includes practically every feature found in any other flow. The app note includes a feature-by-feature comparison table that bears this out.

FPGA Migration: Achronix Tool Flow

So what code changes are typically required when moving to the Achronix tool flow?
The Achronix answer to this question in the app note is that few, if any, RTL changes should be needed. Synplify Pro will automatically handle inferred RLB features such as LUTs and DFFs. The same goes for memories and DSPs, so long as their regular inferencing templates are used. RLBs have a dedicated ALU that Synplify will use for generating efficient math and counter operations. Achronix Speedster7t supports a rich combination of DSP, block memories and shift registers. Wrappers are not needed for primitives such as I/O ports and global buffers. I/Os and buffers are managed by using constraints applied in the I/O designer tool flow.

The app note has extensive sections on memory and DSP instantiation. It also goes into detail on the topic of constraints. It is worth reading these sections in their entirety. Suffice to say that in most cases they are handled in a straightforward way that should make any porting-related work fairly easy.

The end of the app note talks about two distinguishing features of the Achronix Speedster7t family: network-on-chip (NoC) support and their machine learning processor (MLP). The NoC relieves the designer of managing and coding for high-speed data transfers between the FPGA fabric and/or I/Os, without restriction. For instance, the NoC can even populate a GDDR6 or DDR4 memory from the PCIe subsystem without consuming any FPGA fabric resources and with no need to worry about timing closure. The app note includes a reference to the Achronix documentation for the Speedster7t Network on Chip User Guide.

The MLP is a powerful math block available on Speedster7t chips for use in AI/ML applications. Each MLP can have up to 32 multipliers, ranging from 3-bit integer to 24-bit floating point, supported natively in silicon. It is extremely useful for vector and matrix math. It offers integrated memories to optimize neural net operations. They cite an example of a Speedster7t device processing up to 8,600 images per second on ResNet-50.
The most interesting aspect of the Speedster7t family is that, if users wish, they can move their design to the Speedcore embedded FPGA fabric to incorporate it into their own SoC. Speedster7t is very competitive as a standalone FPGA device, but as a Speedcore eFPGA integrated directly into an SoC, Achronix FPGA technology presents entirely new opportunities. As I said at the outset, not only is the app note useful for guidance on migration to Speedster7t, it also shines a light on the competitive differences between Speedster7t and other FPGA technologies. The app note is available on the Achronix website.

Silicon Catalyst and mmTron are Helping to Make mmWave 5G a Reality
Published February 22, 2021

Everyone is talking about 5G these days. The buildout is beginning. The newest iPhone supports the new 3GPP standard. Excitement is building. But there is a back story to all this. Silicon Catalyst recently added a new company called mmTron to their incubator program. These folks are millimeter wave experts, and that turns out to be quite relevant for 5G. I had a chance to catch up with mmTron to explore this new addition to the Silicon Catalyst Incubator. What I discovered was that there is a critical portion of the 5G buildout that has some serious challenges. Challenges that mmTron is uniquely positioned to solve. Read on to learn about the 5G back story and how mmTron’s innovative products will contribute to delivering on the promises of 5G. Silicon Catalyst and mmTron are helping to make mmWave 5G a reality.

The Team


First, a bit about the two folks I spoke with. Dr. Seyed Tabatabaei founded mmTron in 2020. He has substantial expertise in millimeter wave technology having led design efforts at MaCom, Agilent, Endwave and Teramics before founding mmTron. Seyed has assembled a team with exceptional skills in this specialized and critical area, drawing on experience from satellite and defense applications.

Glen Riley has recently joined mmTron as an advisor. Glen has a storied career in semiconductors that includes TI, AT&T and Qorvo. Glen has held several senior executive positions in general management, marketing, and sales. Glen currently is a board member and advisor for companies in the RF and optical markets. He previously knew Seyed as a customer and recently Silicon Catalyst put Glen back in touch with Seyed to become a key executive advisor.


The 5G Design Challenge

It turns out much of the 5G buildout occurring today is based on sub-6GHz spectrum implementations, which are similar to the currently deployed 4G network. The substantial benefits of 5G (e.g., very high bandwidth and very low latency) will be delivered in the millimeter wave spectrum (i.e., 24GHz to 80GHz). Verizon is deploying some of this technology today, and the new iPhone 12 can support that technology. These efforts are just the beginning of the process, and there is still much to do before the full benefits of 5G are realized.

At these frequencies the speed delivered to your handheld device will be equal to or greater than today’s broadband residential connections. This is where the challenges of transmission for 5G exist. You’ve probably heard about the need for sophisticated antenna systems that support beamforming to make all this work.

Beyond antenna systems, there is also a big challenge to deliver electronics for high bandwidth and high-power transmission systems at reasonable cost. Most millimeter wave electronics available today are based on military and satellite applications, where commercial cost pressures aren’t as severe. This is the area where mmTron delivers significant value over and above what is currently available from the existing RF / mmWave suppliers.

The mmTron Solution

Thanks to its proven, patented architecture, mmTron technology can support 5G millimeter wave applications requiring higher power and higher linearity better than other solutions. These key differentiating features mean fewer base stations and smaller phased array antenna systems are needed to deliver the same or greater capability. mmTron’s high linearity products complement existing lower power silicon-based beamformer chips on the market. mmTron estimates that 5G infrastructure costs can be reduced by 40 percent or more using its technology, and that is big news.

mmTron’s outsourced fab and assembly/test ecosystem is already in place. RF silicon-on-insulator, gallium arsenide and gallium nitride technologies are used to deliver mmTron’s products. When compared to other large companies that support this market, mmTron represents a disruptive force in the industry as shown in the figure below.

Competitive Landscape

mmTron is currently in discussions with several very large infrastructure manufacturers. The company will soon close a funding round and tape out its first family of products for first delivery in late 2021. The addition of mmTron to the Silicon Catalyst incubator illustrates the breadth of the program from a technology and market perspective.

You can learn more about mmTron and its new and disruptive technology here. Whether you’re interested in learning more about their product offerings or contributing to the company’s growth, you can inquire here.  It looks like an exciting adventure as Silicon Catalyst and mmTron are helping to make 5G a reality.

[post_title] => Silicon Catalyst and mmTron are Helping to Make mmWave 5G a Reality [post_excerpt] => [post_status] => publish [comment_status] => open [ping_status] => open [post_password] => [post_name] => silicon-catalyst-and-mmtron-are-helping-to-make-mmwave-5g-a-reality [to_ping] => [pinged] => [post_modified] => 2021-02-17 13:34:52 [post_modified_gmt] => 2021-02-17 21:34:52 [post_content_filtered] => [post_parent] => 0 [guid] => https://semiwiki.com/?p=295977 [menu_order] => 0 [post_type] => post [post_mime_type] => [comment_count] => 2 [filter] => raw ) [5] => WP_Post Object ( [ID] => 296139 [post_author] => 28 [post_date] => 2021-02-22 06:00:40 [post_date_gmt] => 2021-02-22 14:00:40 [post_content] => Chuck Schumer Globalfoundries Chips GF has played some groundbreaking roles in the semiconductor ecosystem. The spinout of the AMD fabs and the acquisition of the IBM semiconductor division just to name two. Another big one would be the GF Initial Public Offering which may come as early as 2022. When the IPO was first mentioned during a chat with GF CEO Tom Caulfield I had my doubts. Today however it looks like a perfect storm for a GF IPO with the ongoing semiconductor supply chain issues and the resulting automotive wafer shortages. There is a renewed push for more US based semiconductor manufacturing and other countries are considering the same. With the help of some serious political muscle GF established a semiconductor manufacturing beach head in Upstate NY (Fab 8) in 2009 and additional land rights have already been secured for future expansion. Another strong sign of GF U.S. based semiconductor manufacturing prowess is the recent announcement with the U.S. Department of Defense: U.S. 
Department of Defense Partners with GLOBALFOUNDRIES to Manufacture Secure Chips at Fab 8 in Upstate New York To make a long story short, the IBM Semiconductor group acquired by GF was a longstanding trusted contract chip manufacturer to the U.S. Government through the Fishkill fab (IBM building 323). That relationship was maintained by GF and is now being expanded/transferred to Fab 8 in Malta. Fishkill Fab 10 was sold to ON Semiconductor, so this transfer is an important step for GF. The first chips under this agreement will arrive in 2023 and will be based on a 45 nm SOI process. Here are the related quotes: “GLOBALFOUNDRIES is a critical part of a domestic semiconductor manufacturing industry that is a requirement for our national security and economic competitiveness,” said Senate Majority Leader Chuck Schumer, who successfully passed new federal semiconductor manufacturing incentives in last year’s National Defense Authorization Act (NDAA). “I have long advocated for GLOBALFOUNDRIES as a key supplier of chips to our military and intelligence community, including pressing the new Secretary of Defense, Lloyd Austin, to further expand the Department of Defense’s business with GLOBALFOUNDRIES, which will help expand their manufacturing operations and create even more jobs in Malta.” In a supporting statement from the U.S. Department of Defense, “This agreement with GLOBALFOUNDRIES is just one step the Department of Defense is taking to ensure the U.S. sustains the microelectronics manufacturing capability necessary for national and economic security. This is a precursor to major efforts contemplated by the recently passed CHIPS for America Act, championed by Senator Charles Schumer, which will allow for the sustainment and on-shoring of U.S. 
microelectronics capability.” "GLOBALFOUNDRIES thanks Senator Schumer for his leadership, his ongoing support of our industry, and his forward-looking perspective on the semiconductor supply chain,” said Tom Caulfield, CEO of GF. "We are proud to strengthen our longstanding partnership with the U.S. government, and extend this collaboration to produce a new supply of these important chips at our most advanced facility, Fab 8, in upstate New York. We are taking action and doing our part to ensure America has the manufacturing capability it needs, to meet the growing demand for U.S. made, advanced semiconductor chips for the nation’s most sensitive defense and aerospace applications." Given his current political clout, having the Senate Majority Leader as a champion is a tremendous asset for GF. And let's not forget GF Fab 1 in Dresden. I was there when Angela Merkel toured the facility in 2015 and thought for sure there would be serious government investment to strengthen the EU semiconductor supply chain. How times have changed. As I said, a perfect storm for GLOBALFOUNDRIES, absolutely. [post_title] => A Perfect Storm for GLOBALFOUNDRIES [post_excerpt] => [post_status] => publish [comment_status] => open [ping_status] => open [post_password] => [post_name] => the-globalfoundries-impending-ipo [to_ping] => [pinged] => [post_modified] => 2021-02-22 06:41:42 [post_modified_gmt] => 2021-02-22 14:41:42 [post_content_filtered] => [post_parent] => 0 [guid] => https://semiwiki.com/?p=296139 [menu_order] => 0 [post_type] => post [post_mime_type] => [comment_count] => 3 [filter] => raw ) [6] => WP_Post Object ( [ID] => 296121 [post_author] => 19385 [post_date] => 2021-02-21 10:00:25 [post_date_gmt] => 2021-02-21 18:00:25 [post_content] => I recently posted an article [1] published in 2013 on the cost of 3D NAND Flash by Dr. Andrew Walker, which has since received over 10,000 views on LinkedIn. The highlight was the plot of cost vs. 
the number of layers showing a minimum cost for some layer number, dependent on the etch sidewall angle. In this article, the same underlying principles are used to calculate the effective 2D design rule for the 3D NAND array as well as to find the maximum density, both of which are strongly dependent on the sidewall angle of the holes etched through the multilayer stack. A previous article of mine focused on initial estimates of 2D vs. 3D wafer cost [2], but here we will go directly to the impact of 3D processing on the effective 2D density. Model of 3D NAND cell The 3D NAND cell has a typical arrangement as shown in Figure 1. The charge storage areas are circular rings containing at least a nitride layer sandwiched between two oxide layers. The rings encircle a silicon channel, typically also ring-shaped. The circular hole structures are taken to be located on a hexagonal close-packed lattice. If we take the minimum distance between holes to be equal to 1/4 the hole diameter [3], the density will be (2/1.25)^2/sqrt(3) ~ 1.478 times that of the case where the same diameter holes are placed on a square lattice with the same minimum distance between holes. This proportionality will help in determining the equivalent 2D design rule later, i.e., the design rule of the 2D planar NAND array with the same density (assuming one bit per cell). Figure 1. 3D NAND Flash unit cell. 3D NAND Hole Widening Ideally, the holes penetrating the layers of the 3D NAND stack have vertical sidewalls. Realistically, the sidewall deviates from vertical by a fraction of a degree [4]. As a result, the bottom diameter of the hole will be smaller than the top diameter. It is therefore the top diameter that determines the cell pitch. The widening of the hole diameter from the bottom to the top can be given by: Top diameter - bottom diameter = cot(sidewall angle) * # layers * layer height. 
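Both numbers in this section are easy to check numerically. The sketch below (Python) evaluates the hexagonal-lattice density factor and the hole-widening relation above; the 50 nm layer height used with the 92-layer, 89.7 deg example is an illustrative assumption of mine, not a figure from the article.

```python
import math

# Density gain of a hexagonal vs. square lattice at the same minimum
# hole-to-hole distance (1/4 of the hole diameter), per the text.
hex_over_square = (2 / 1.25) ** 2 / math.sqrt(3)
print(round(hex_over_square, 3))  # ~1.478

def hole_widening(n_layers, layer_height_nm, sidewall_angle_deg):
    """Top diameter - bottom diameter = cot(angle) * # layers * layer height."""
    return n_layers * layer_height_nm / math.tan(math.radians(sidewall_angle_deg))

# Illustrative only: 92 layers at an assumed 50 nm per layer, 89.7 deg taper.
print(round(hole_widening(92, 50.0, 89.7), 1))  # widening in nm, ~24.1
```

Even a taper of only 0.3 deg from vertical widens the hole by tens of nanometers over a multi-micron stack, which is why the top diameter sets the pitch.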
The top diameter is used to determine the equivalent 2D design rule (E2DDR): 1.25^2 * sqrt(3) * (top diameter)^2 = # layers * 4 * (E2DDR)^2, or E2DDR ~ 0.82 * top diameter/sqrt(# layers) This allows us to predict a maximum density or minimum 2D equivalent design rule for some number of layers, at a given sidewall angle. We can still expect the equivalent 2D design rule to reach 10 nm. Figure 2. Top: Widening of diameter as stack height increases with number of layers. Bottom: Equivalent 2D design rule vs. number of cell layers, for different bottom diameters, at a sidewall taper angle of 89.7 deg. (visually estimated for Samsung's 92-layer case from IWAPS 2019 presentation by J. Choe [4]). Note that the maximum density or minimum equivalent design rule occurs for a smaller number of layers for a smaller diameter. This means taller holes would eventually need to be built up from stacking multilayers supporting shorter holes, with alignment required. It is a vertical analogy to the Litho-Etch-Litho-Etch... multipatterning used by foundries [5]. This is already a common practice among 3D NAND manufacturers [4], with only Samsung holding out so far, but considering it for seventh-generation V-NAND [6]. References [1] A. J. Walker, IEEE Trans. Semicon. Mfg. 26, 619 (2013). [2] F. Chen, Toshiba's Cost Model for 3D NAND: https://www.linkedin.com/pulse/toshibas-cost-model-3d-nand-frederick-chen, also https://semiwiki.com/semiconductor-manufacturers/291971-toshiba-cost-model-for-3d-nand/ [3] A. Tilson and M. Strauss, Intl. Symp. Phys. & Failure Analysis Integ. Circ., 2018. [4] Some figures for measurement are provided for example in J. Choe's IWAPS 2019 presentation "Technology Views on 3D NAND Flash: Current and Future." http://www.chipmanufacturing.org/1-A2-Short%20version%20for%20Publish_IWAPS%202019_Jeongdong%20Choe_TechInsights_3D%20NAND_F_s.pdf [5] J. Huckabay et al., Proc. 
SPIE 6349, 634910 (2006). [6] https://en.yna.co.kr/view/AEN20201201006900320 [post_title] => Calculating the Maximum Density and Equivalent 2D Design Rule of 3D NAND Flash [post_excerpt] => [post_status] => publish [comment_status] => open [ping_status] => open [post_password] => [post_name] => calculating-the-maximum-density-and-equivalent-2d-design-rule-of-3d-nand-flash [to_ping] => [pinged] => [post_modified] => 2021-02-22 06:41:03 [post_modified_gmt] => 2021-02-22 14:41:03 [post_content_filtered] => [post_parent] => 0 [guid] => https://semiwiki.com/?p=296121 [menu_order] => 0 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [7] => WP_Post Object ( [ID] => 296148 [post_author] => 19 [post_date] => 2021-02-21 08:00:39 [post_date_gmt] => 2021-02-21 16:00:39 [post_content] => Elon Knows When You Crash
It’s true. Elon Musk, CEO of Tesla Motors, knows when you crash your Tesla. He just isn’t obliged, in the U.S., to do anything about it. And he’s not alone. Here it is, 2021 and buyers of cars in the U.S. can’t count on getting automatic crash notification (ACN) included in their next new car.  Even those cars equipped with ACN require a subscription for it to work in most cases. When European regulators mandated eCall in all new cars years ago, those of us on this side of the Atlantic chuckled at their feeble attempt to “catch up” with the U.S., where OnStar had been launched by GM 20 years before. While the EU was working on eCall, the U.S. was tinkering with “next gen 9-1-1.” Now, here we are in 2021, and emergency crash notification – an automatic call for help from a car in the event of an airbag deployment, or a button-push request for assistance – is still neither a standard feature on cars sold in the U.S. nor a mandated piece of automotive kit.  If you crash your car in the U.S., you’re pretty much on your own if you haven’t paid for the built-in telematics service. Tesla is a special case, though.  By now we all know that Musk is collecting buckets of vehicle data throughout the operational life of a typical Tesla via its built-in wireless connection.  We also know that Musk has used that data forensically to get himself and his company “off the hook” in the event of multiple spectacular and fatal Tesla crashes. Time and again Musk has used vehicle data to demonstrate how drivers have misused Tesla vehicles, violating various warnings and caveats, leading to fatal encounters with other vehicles and inanimate objects.  We’ve all seen multiple Tesla RUD (rapid unplanned disassembly) pictures and videos.  What is missing from all of these events is the timely arrival of assistance in the form of police, fire department, or ambulance personnel – beckoned by a built-in, on-board 911 call – a la OnStar or some equivalent. This puts Musk in a special category. 
He is using the wireless connection and the data collected thereby against the misbehaving vehicle owner rather than putting connectivity to work to provide assistance in urgent circumstances. For the rest of the industry, the failure of auto makers to provide a free, built-in emergency call capability in all cars sold in North America – including General Motors vehicles – is a sad commentary on the industry.  But Tesla’s failure to provide a built-in emergency call function stands out. In a recent Twitter exchange between Musk and a Tesla owner – who was unable to summon assistance using his phone and also was unable to access the vehicle’s wireless connection to seek help – the Tesla Motors CEO tweeted “Absolutely” to the suggestion that Tesla ought to enable emergency calling from its vehicles. So, Musk likes the idea. Musk already offers this solution on vehicles sold in continental Europe and Russia. Tesla owners in the U.S. wait. Musk's Twitter exchange with Tesla owner: https://cleantechnica.com/2021/01/03/tesla-vehicles-could-be-able-to-call-911-during-an-emergency/ It was 25 years ago that GM first began the process of introducing emergency call modules developed as part of Project Beacon in Cadillac vehicles – beginning the journey to the introduction of what we now know as OnStar.  At that time GM Executive Chairman Harry Pearce asked the perplexing questions (from OnStar President Chet Huber’s “Detour”): “If one hundred cars crash and they don’t have something like OnStar on board, how many of them will call for help?” “Now, how many out of a hundred OnStar-equipped cars that crash will need to call for help before we’d be more wrong for holding back a potentially lifesaving technology like this than we would be for putting it in?” The rest is history, as they say.  OnStar was born, but it was another 10 years before it was built into every GM vehicle.  And today, the automatic crash notification feature from GM is still not free.  
A friend of mine is fond of saying that making customers pay for automatic crash notification is like a hotel charging you for the fire extinguisher (or sprinkler fire suppression system) in your room. Musk should correct this embarrassing omission in Tesla vehicles.  If Tesla can deliver cars with eCall in Europe and Russia, the company can deliver an equivalent solution in the U.S. The same goes for the rest of the automotive industry.  Car makers shouldn’t be de-contenting vehicles of vital safety systems for the U.S. market and up-contenting for Europe and Russia.  Automatic crash notification in passenger vehicles ought to be regarded as standard equipment - a human right maybe? Automatic crash notification is only a start.  There is further work needed on leveraging vehicle data in the event of a crash to determine crash severity, the condition and number of vehicle occupants, and the accurate location of the vehicle.  It’s not too late for Tesla to show the way forward.  Sad to say, in 2021, automatic crash notification is not a solved problem in the U.S.
[post_title] => Elon Knows When You Crash [post_excerpt] => [post_status] => publish [comment_status] => open [ping_status] => open [post_password] => [post_name] => elon-knows-when-you-crash [to_ping] => [pinged] => [post_modified] => 2021-02-22 06:40:14 [post_modified_gmt] => 2021-02-22 14:40:14 [post_content_filtered] => [post_parent] => 0 [guid] => https://semiwiki.com/?p=296148 [menu_order] => 0 [post_type] => post [post_mime_type] => [comment_count] => 1 [filter] => raw ) [8] => WP_Post Object ( [ID] => 295546 [post_author] => 30874 [post_date] => 2021-02-21 06:00:11 [post_date_gmt] => 2021-02-21 14:00:11 [post_content] => As many of you know, Bitcoin prices have surged recently, up to $40,000 USD per bitcoin as of February 2021. We are in the middle of a bit rush! People are noticing Bitcoin's surge and wondering how they can profit from it. In this article we will explore how custom silicon is a vital part of a winning bitcoin mining strategy. Some people wonder what it would take to make their own Bitcoin mining custom silicon in order to beat everyone else. My quick survey of the field indicates the Bitmain Antminer S19 Pro is the state-of-the-art bitcoin mining equipment as of February 2021. Just as Amazon, Apple, Facebook, Tesla, Google, and others have realized that there is a clear competitive advantage to their businesses from custom silicon, Bitmain too decided to make its own custom silicon. Using the latest silicon process node increases the power efficiency and the processing power of the bitcoin miner system. This is why bitcoin miner system manufacturers continue to update their custom silicon mining chips. Here are some things to consider when planning your next custom silicon mining chip:
  • Selecting a chip supplier to design your custom silicon (i.e. ASIC).

Finding a good chip supplier to design your ASIC is an art in and of itself. You want a reputable company with an excellent design team, but they also want a reputable system company as a customer.  So if this is your first project making a bitcoin miner, you will need to convince the chip supplier (among others) that you're a serious customer. There are many chip design houses in the world, but many of them are probably not who you'd want to hire if you want to reduce your technical and schedule risks. A way to mitigate your risk of selecting the wrong supplier, and also to present your RFQ professionally, is to hire a silicon manager to assist you in those interactions. As part of CustomSilicon.com's process we work through the Concept and Requirements phases with the chip supplier candidates, and end up selecting one candidate after the Si proposal review. I'd go for 4 chip supplier candidates at Concept, reduce that to 2 suppliers at Concept phase sign-off, and then at Requirements phase sign-off downselect to 1 chip supplier. You want to buy RTL IP that is ready for use or hire a chip supplier that has it from past projects. There are some companies out there with previous experience designing custom ASICs for bitcoin mining. But you always need to thoroughly vet them before moving forward and writing checks for NRE and masks.
  • Project cost.

There are some costs that are more predictable than others. A disclaimer: all prices below are my gut feeling/what I read/hear from others, from my experience, etc... But as you should know, many prices are negotiable, and they are influenced by your relationships, total volumes, the supplier's opportunity cost, negotiating skills, etc... Here are some:
    1. Masks: To make a chip at the foundry you need to buy masks. ASICs for bitcoin mining are already at the 7 nm node today. So if you want to leapfrog the competition you need to shoot for 5 nm or 3 nm. 3 nm is the highest risk since this process node is in development.  In my opinion, masks for a 5 nm or 3 nm process will be in the 10 to 14 million USD range; let's call it 12 million USD for easy math. A project like this will probably take two full mask sets in a good-case scenario. Selecting a good supplier, performing detailed reviews, using state-of-the-art EDA tools, getting direct foundry support, and hiring a silicon manager are all good ways to mitigate the risk of needing to tape out more than two times.
    2. NRE: This is the cost to pay the chip design house. This is really subjective and speculative before having gone through the Concept and Requirements phases, since it will depend on how closely the RTL they have matches the requirements the system company wants and what trade-offs you negotiate. It also depends on foundry rule-deck accuracy and, simply put, on what else that chip design house could be working on, since that is an opportunity cost for them. In my opinion this will land in the 2 to 5 million USD range. But this can really vary depending on negotiations and everything mentioned here.
      • Firmware: Here it's important to decide who will write the firmware for the chip. This can actually be a significant cost comparable to chip designer time cost and sometimes exceeding it.
      • Assembly and test: It's possible that the chip design house is not going to provide a full solution. In that case you need to go work with an outsourced assembly and test (OSAT) house.
      • Ideally you don't need to go through one of the value chain aggregators (VCAs) to work with the wafer foundry, but that could happen. I think working directly with the foundry will help your project go faster and reduce errors, but the foundries don't want to work directly with startups (i.e. companies with small volume). So the point made further below in this article about buying foundry wafers is key to gaining direct foundry access.
    3. Summary of fixed costs: 24 Million USD (assuming two full mask sets) + 2 to 5 Million USD (NRE) = 26 to 29 Million USD
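As a sanity check, the fixed-cost arithmetic above can be expressed directly. All figures in this sketch are the rough estimates from this section, not supplier quotes:

```python
# Rough fixed-cost roll-up for the custom mining ASIC (millions of USD).
mask_set_musd = 12          # one full 5 nm / 3 nm mask set, midpoint estimate
mask_sets = 2               # plan on two tape-outs
nre_low, nre_high = 2, 5    # chip design house NRE range

fixed_low = mask_sets * mask_set_musd + nre_low    # 24 + 2 = 26
fixed_high = mask_sets * mask_set_musd + nre_high  # 24 + 5 = 29
print(f"{fixed_low} to {fixed_high} million USD")
```

Note how the mask budget dominates: a third tape-out would add another ~12 million USD, which is why the risk-mitigation steps above matter.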
  • Project schedule.

    1. My gut feeling is that getting from Concept phase start to Requirements phase sign-off is probably a 3-month endeavor.
    2. Time from spec freeze to tape out is probably in the 6 months range. This could be longer or shorter depending on how close the starting RTL is to what the system company wants from its miner.  It's important to highlight that specification freeze requires the system company to develop concise, precise and complete requirements documentation during the Concept and Requirements phases. This is important so that the chip supplier can provide a draft specification quickly after Requirements phase sign-off and we can close on the specification of the chip with Specification phase sign-off. Constant spec changes during development are a project schedule and cost nightmare that can be avoided with disciplined process and early attention to detail.
    3. Time from tape out to tested samples is probably 5 months.
    4. Time to first samples: Time to first ASIC samples is equal to 1+2+3, which is 14 months for your late Proto or first EVT build.
    5. You will likely need to spin the silicon one more time to get to final shippable silicon. This likely means 2 months of validation time, 2 months to get ready for the tape out and 5 months of fabrication time. You will have to be thorough at chip- and system-level validation to find all bugs. Then later check that the ECOs are properly root-caused and verified before tape out.  So your final silicon samples (i.e. not production quantities) come in at 23 months for your DVT build.
    6. Mass production risk ramp is another area where you will need to make a judgement call. This will be about how much money you want to risk without knowing that the final silicon is good yet (i.e. you haven't completed your DVT build).  You can decide to pull in the bitcoin miner system's mass production ramp date by risk releasing wafers before building DVT phase. To do that you need to go through all your validation status and make a risk assessment in preparation for Mass production phase sign-off leading to your PVT (i.e. final) build. It takes about 5 months to get mass production parts in volume. So if you waited to build systems with final silicon samples until 23 months and then signed off on ramping the wafers at DVT build exit sign off, it will take another 5 months (plus some assembly packaging and test time) to get those mass production chips in quantity to your factory. Risk released wafers could end up being scrapped if you find bugs at DVT that are unacceptable with your final silicon. So this needs to be done with care as it can cost you millions of dollars in scrapped wafer material. Sometimes bugs can be fixed with one time programmable memory (OTP) at final test which would save you from having to scrap the wafers. So you will want to plan to lock OTP settings sometime before you would need to run chips through final test for your mass production ramp.
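The milestone arithmetic in items 1-6 rolls up as follows; the month counts are the gut-feel estimates from the list above:

```python
# Schedule roll-up in months, per the estimates above.
concept_to_requirements = 3   # item 1
spec_freeze_to_tapeout = 6    # item 2
tapeout_to_samples = 5        # item 3

# Item 4: first ASIC samples (late Proto / first EVT build).
first_samples = concept_to_requirements + spec_freeze_to_tapeout + tapeout_to_samples

# Item 5: one respin -- validation, tape-out prep, fabrication.
respin = 2 + 2 + 5
final_samples = first_samples + respin

# Item 6: waiting for DVT exit before ramping adds ~5 more months of fab
# (plus some assembly, packaging, and test time on top).
mass_production = final_samples + 5

print(first_samples, final_samples, mass_production)  # 14 23 28
```

Risk-releasing wafers before DVT exit is essentially a way to overlap that last 5-month fab leg with validation, at the cost of possibly scrapping the wafers.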
  • Buying wafers from the foundry.

As you may have heard there is a shortage of silicon wafer foundry capacity. So you will need to make a compelling case to the foundry why they should work with you in their 5 nm or 3 nm process nodes. As you know, money is a great facilitator. So it may be that you need to commit to buy wafers ahead of time with the foundry. If you commit to buy a lot of wafers ahead of time, they will need to provide you with direct support, preferential fabrication times (super hot lots, hot lots), etc... Let's say you plan on building 100,000 bitcoin miner systems, each system containing 200 chips/ASICs inside. So that is 20,000,000 chips. In 300 mm wafers that is probably something like 5,000 wafers. The number of chips per wafer depends on your final die size, your fab yield, your package yield and your test yield. So here I assumed you get 4,000 good chips/dies per wafer. Of course these numbers could be different for your system, but I will assume these to illustrate what I think is the likely ball park. During the process phases all of these details are nailed down and adjusted as needed. The question then is what is the minimum amount of wafers the foundry would ask you to commit to buy upfront to get the kind of support and preferential access you need to get your bitcoin miners built faster. I am going to guess a 3 nm wafer may end up costing $20,000 USD. So you see that if you end up buying 5,000 wafers that is a $100,000,000 USD purchase! Maybe you can commit to buying 10% of that upfront and get a direct deal with the foundry, maybe not; you need to negotiate.
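Using the same assumed numbers (100,000 systems, 200 chips each, 4,000 good dies per wafer, and a guessed $20,000 per 3 nm wafer), the commitment sizing works out as:

```python
# Wafer-count and purchase sizing, using the article's assumed yields/prices.
systems = 100_000
chips_per_system = 200
good_dies_per_wafer = 4_000   # assumed, net of fab/package/test yield
wafer_cost_usd = 20_000       # guessed 3 nm wafer price

chips_needed = systems * chips_per_system            # 20,000,000 chips
wafers_needed = chips_needed // good_dies_per_wafer  # 5,000 wafers
wafer_spend = wafers_needed * wafer_cost_usd         # $100,000,000
upfront_10pct = wafer_spend // 10                    # $10,000,000

print(wafers_needed, wafer_spend, upfront_10pct)
```

Even the 10% upfront figure is in the same ballpark as the entire chip-development budget, which is why the negotiation with the foundry deserves as much care as the design itself.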

There are also system level miner considerations.

These are outside of the scope of this article since they are not directly custom silicon items. But it's worth briefly mentioning them since custom silicon is developed to directly support a custom system hardware project; in this case a bitcoin miner.  Here are some things that will need to be planned:
  • Hiring a CM (contract manufacturer).

You need to build systems in mass production somewhere. This supplier needs to be able to source all components, assemble, test them and package them to be shipped for you. A lot of companies choose CMs in Asia (China, Taiwan, etc...). These guys will also develop some or all of your factory test infrastructure. You need to pick wisely. The same CM company has very different levels of quality and experienced personnel for different customers. If you're a new or small customer you may not get a good team, so you need to shop around for the right CM partner.
  • Pre-silicon deliverables.

    1. FPGA board. Your firmware team needs a platform to start developing code on in preparation for the first build.
    2. Blank packaged chips and mock, form-factor-accurate PCB boards. Your mechanical engineers may need these so they can build mechanical-only prototypes to mock up the cooling solution well ahead of your first system build at the CM.
  • Designing the hard system level stuff.

    1. Firmware. You'll need to write the firmware to control the PCBs with all the ASICs on them. So you need some firmware engineers with experience writing firmware for bitcoin miners.
    2. Cooling. You'll need to cool down your miners. These miners consume thousands of watts each. This means you'll need to design a customized heatsink system. Some people use fans, others immersion cooling, etc... Whatever you do, this is a critical part of the project and you need to hire good mechanical engineers with experience in this type of design.
    3. PCB boards. You'll need to design efficient power supplies. There is no point in making a super power-efficient custom silicon chip and then wasting lots of power in the power converter plugged into the wall feeding your chip. You'll also need to design good PCBs with thick copper so that your board losses are not too high. This all means that you need to hire a good electrical engineer to design this for you.
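To see why converter and board efficiency matter, here is a small illustrative calculation; the 3 kW chip power and both efficiency figures are my assumptions for the sketch, not numbers from the article:

```python
# Wall power needed to feed one miner's ASICs through imperfect conversion.
chip_power_w = 3_000      # assumed total ASIC power ("thousands of watts")
psu_efficiency = 0.95     # assumed AC-to-DC converter efficiency
board_efficiency = 0.97   # assumed PCB power-distribution efficiency

wall_power_w = chip_power_w / (psu_efficiency * board_efficiency)
wasted_w = wall_power_w - chip_power_w
print(round(wall_power_w), round(wasted_w))  # ~3256 W drawn, ~256 W lost as heat
```

Those hundreds of watts of conversion loss per miner are pure cost at the electricity meter, and extra heat the cooling design above has to remove.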

In summary.

Likely time to first samples 14 months from kick off. Likely time to final silicon samples 23 months from kick off. Summary of fixed costs 26 to 29 million USD. This is for the chip only. There will be some additional costs to develop the bitcoin miner system as discussed in the system level miner considerations section. The estimates above assume the project is run like a tight ship. This can be hard to do generally, and especially when there are a lot of people and companies working together for the first time. Without experienced people and a good process to follow the chances to execute in these timelines are greatly diminished. Managing cross functional, multi national and multi company teams is vital for this engagement. As you can see this project is doable. It's also a big investment with big risks that need to be mitigated. So the question is: what will be the price of bitcoin by the time you have your miners ready?   For more information contact us. Disclaimers: All prices, schedules and details in this article are my best guesses, my opinions, and what I gather from multiple sources of information. I provide this for illustration and informational purposes only. Use at your own risk. As a project progresses through the phase sign offs all these details are committed/verified with suppliers. [post_title] => How do you plan the best Bitcoin miner in the world? 
[post_excerpt] => [post_status] => publish [comment_status] => open [ping_status] => open [post_password] => [post_name] => how-do-you-plan-the-best-bitcoin-miner-in-the-world-custom-silicon-is-the-answer [to_ping] => [pinged] => [post_modified] => 2021-02-22 06:38:30 [post_modified_gmt] => 2021-02-22 14:38:30 [post_content_filtered] => [post_parent] => 0 [guid] => https://semiwiki.com/?p=295546 [menu_order] => 0 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [9] => WP_Post Object ( [ID] => 295691 [post_author] => 28 [post_date] => 2021-02-19 10:00:22 [post_date_gmt] => 2021-02-19 18:00:22 [post_content] => Dan and Mike are joined by Mahesh Tirupattur, executive vice president at Analog Bits. Mahesh discusses how he found his way to analog IP design and his long association with Analog Bits. Effective strategies for analog IP design and deployment are discussed, as well as leading-edge applications for analog IP. Mahesh also provides the back story on those Analog Bits gift bottles of wine that are seen each year around the holidays. The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual. 
[post_title] => Podcast EP8: A Look Inside Analog IP and Analog Bits [post_excerpt] => [post_status] => publish [comment_status] => open [ping_status] => closed [post_password] => [post_name] => podcast-ep8-a-look-inside-analog-ip-and-analog-bits [to_ping] => [pinged] => [post_modified] => 2021-02-23 11:11:51 [post_modified_gmt] => 2021-02-23 19:11:51 [post_content_filtered] => [post_parent] => 0 [guid] => https://semiwiki.com/?post_type=podcast&p=295691 [menu_order] => 0 [post_type] => podcast [post_mime_type] => [comment_count] => 0 [filter] => raw ) ) [post_count] => 10 [current_post] => -1 [in_the_loop] => [post] => WP_Post Object ( [ID] => 295695 [post_author] => 11830 [post_date] => 2021-02-24 10:00:19 [post_date_gmt] => 2021-02-24 18:00:19 [post_content] =>

HCL Expands Cloud Choices with a Comprehensive Guide to Azure Deployment

HCL Compass is quite a powerful tool to accelerate project delivery and increase developer productivity. Last August I detailed a webinar about HCL Compass that will help you understand the benefits and impact of a tool like this. This technology falls into the category of DevOps, which aims to shorten the systems development life cycle and provide continuous delivery with high software quality. Scalability across the enterprise is a key factor for success here, so cloud migration is definitely a consideration. Recently, I detailed how HCL is assisting its users to get to the Amazon Elastic Compute Cloud. Vendor choice is definitely a good thing. I’m happy to report that HCL expands cloud choices with a comprehensive guide to Azure deployment.

Microsoft Azure, commonly referred to simply as Azure, is a major force in cloud computing. Moving any enterprise application to the cloud provides significant benefits, including:

  • Lower costs
  • Increased agility
  • Reliable global delivery

There are some specific impacts that HCL’s migration guide cites (a link is coming). Some of these are worth repeating:

  • Cost effectiveness: VMs (virtual machines) deployed in the cloud remove the capital expense of procuring and maintaining equipment as well as the expense of maintaining an on-premises data center. These VMs can host instances of HCL Compass
  • Scalability: Estimating data center capacity requirements is very difficult. Over-estimation leads to wasted money and idle resources. Under-estimation degrades the business’s ability to be responsive. Cloud computing resources can easily and quickly be scaled up or down to meet demand. Of particular interest regarding this point, Azure provides autoscaling that automatically increases or decreases the number of VM instances as needed
  • Availability: Azure, like other cloud providers, invests in redundant infrastructure, UPS systems, environmental controls, network carriers, power sources, etc. to ensure maximum uptime. Most enterprises simply cannot afford this kind of scale

The guide from HCL provides everything you need to plan your HCL Compass deployment or migration in Azure. There are a lot of items to consider, so having all this in one place is very useful. Here are just a few of the considerations that are addressed in the HCL guide:

Supported database platforms: Ensuring you are using the correct version of the required database software is key. Versions between on-premises and the cloud are discussed, along with recommendations on how to utilize an on-premises database for a cloud deployment. This latter discussion supports a hybrid environment.

Accessing the data: For a cloud deployment, the preferred method of data access is to utilize the HCL Compass web client. The specific browsers and versions to use are specified, along with the cautions and pitfalls of other approaches.

Requisite software: Along with Linux database versions, the required versions for installation software, Java, Windows and Linux are discussed.

Many other topics are explained in detail, including:

  • Performance and performance monitoring
  • Cross-server communication
  • Load balancing
  • SSL enablement
  • Single sign-on implementation
  • LDAP authentication
  • Multi-site implementation
  • EmailRelay considerations

A detailed discussion of migration considerations is also presented, along with sample implementation scenarios. One scenario treats HCL Compass and the database in Azure; the other treats HCL Compass in Azure with the database on-premises. All in all, this guide provides a complete roadmap for implementing HCL Compass in Azure. I can tell you from first-hand experience that cloud migration can be challenging. Software is provisioned and managed differently in a cloud environment. As long as you understand those nuances, things go smoothly.

The migration guide provided by HCL helps you discover all those nuances. You can get your copy of this valuable guide here. Download it now and find out how HCL expands cloud choices with a comprehensive guide to Azure deployment.


HCL Expands Cloud Choices with a Comprehensive Guide to Azure Deployment
by Mike Gianfagna on 02-24-2021 at 10:00 am


Finding Large Coverage Holes. Innovation in Verification
by Bernard Murphy on 02-24-2021 at 6:00 am

Is it possible to find and prioritize holes in coverage through AI-based analytics on coverage data? Paul Cunningham (GM, Verification at Cadence), Jim Hogan and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Using Machine Learning Clustering To Find Large Coverage Read More


Single HW/SW Bill of Material (BoM) Benefits System Development
by Daniel Payne on 02-23-2021 at 10:00 am

Most large electronics companies take a divide and conquer approach to projects, with clear division lines set between HW and SW engineers, so quite often the separate teams have distinct methodologies and ways to design, document, communicate and save a BoM. This division can lead to errors in the system development process,… Read More


Achronix Demystifies FPGA Technology Migration
by Tom Simon on 02-23-2021 at 6:00 am

System designers who are switching to a new FPGA platform have a lot to think about. Naturally, a change like this is usually made for good reasons, but there are always considerations regarding device configurations, interfaces and the tool chain to deal with. To help users who have decided to switch to their FPGA technology, Achronix… Read More


Silicon Catalyst and mmTron are Helping to Make mmWave 5G a Reality
by Mike Gianfagna on 02-22-2021 at 10:00 am

Everyone is talking about 5G these days. The buildout is beginning. The newest iPhone supports the new 3GPP standard. Excitement is building. But there is a back story to all this. Silicon Catalyst recently added a new company called mmTron to their incubator program. These folks are millimeter wave experts and that turns out to… Read More


A Perfect Storm for GLOBALFOUNDRIES
by Daniel Nenni on 02-22-2021 at 6:00 am

GF has played some groundbreaking roles in the semiconductor ecosystem. The spinout of the AMD fabs and the acquisition of the IBM semiconductor division, just to name two. Another big one would be the GF Initial Public Offering, which may come as early as 2022.

When the IPO was first mentioned during a chat with GF CEO Tom Caulfield … Read More


Calculating the Maximum Density and Equivalent 2D Design Rule of 3D NAND Flash
by Fred Chen on 02-21-2021 at 10:00 am

I recently posted an article [1] published in 2013 on the cost of 3D NAND Flash by Dr. Andrew Walker, which has since received over 10,000 views on LinkedIn. The highlight was the plot of cost vs. the number of layers showing a minimum cost for some layer number, dependent on the etch sidewall angle. In this article, the same underlying… Read More


Elon Knows When You Crash
by Roger C. Lanctot on 02-21-2021 at 8:00 am

It’s true. Elon Musk, CEO of Tesla Motors, knows when you crash your Tesla. He just isn’t obliged, in the U.S., to do anything about it. And he’s not alone.

Here it is, 2021, and buyers of cars in the U.S. can’t count on getting automatic crash notification (ACN) included in their next new car. Even those cars equipped with ACN require… Read More

How do you plan the best Bitcoin miner in the world?
by Raul Perez on 02-21-2021 at 6:00 am

As many of you know, Bitcoin prices have surged recently, reaching $40,000 USD per bitcoin as of February 2021. We are in the middle of a bit rush! People are noticing Bitcoin’s surge and wondering how they can profit from it. In this article we will explore how custom silicon is a vital part of a winning bitcoin mining strategy.

Some… Read More


Podcast EP8: A Look Inside Analog IP and Analog Bits
by Daniel Nenni on 02-19-2021 at 10:00 am

Dan and Mike are joined by Mahesh Tirupattur, executive vice president at Analog Bits. Mahesh discussed how he found his way to analog IP design and his long association with Analog Bits. Effective strategies for analog IP design and deployment are discussed as well as leading edge applications for analog IP . Mahesh also provides… Read More