WEDNESDAY April 17, 10:45 - 11:15 | Foyer
EVENT TYPE: POSTER SESSION
SESSION 7: Poster Session
Lincoln Lee - Mentor, A Siemens Business
Michael Chiang - Mentor, A Siemens Business
|7.1||How to Verify IP Performance|
|IP (intellectual property) performance is becoming more and more important, and performance issues often force modifications to the IP architecture, so it is best to verify IP performance at the block-level verification stage rather than discovering problems only at the system-on-a-chip (SoC) level. The question is how to verify IP performance and how to ensure that it meets its targets.|
|Speaker:||Deyong Yang - Unisoc Communications, Inc.
|Authors:||Deyong Yang - Unisoc Communications, Inc.
Henry Chew - MediaTek, Inc.
|7.2||Random Stability of System Verilog's Verification Platform|
|Due to the large scale of current IC designs, most SystemVerilog verification platforms today use random stimulus and random settings to cover as many test cases as possible in order to converge on functional coverage. On the one hand, the random nature of the verification platform tends to create reasonable scenarios that even the designer cannot anticipate. On the other hand, the platform must also ensure that any scenario, once created, can be accurately reproduced. The repeatability of a given random behavior is called the random stability of the verification platform in this paper. The randomness of SystemVerilog is achieved by random number generators and random seeds, so the verification platform can guarantee identical scenarios by using the same initial seed without any modification. However, if the design under test is unstable for various reasons during the IC project, the verification platform must be adjusted accordingly, and some design bugs found along the way may be difficult to fix quickly. Reproducing a scenario merely by keeping the same initial seed then becomes difficult. In this case, high demands are placed on the architecture of the verification platform and its internal randomization strategy. This paper first studies the random mechanism of SystemVerilog in depth, then examines the characteristics of SystemVerilog hierarchical seeding and the random processes of the verification platform. Finally, from the two aspects of platform architecture and concrete coding strategy, it proposes corresponding improvements so that the platform maintains random stability under constant patching.|
|Speaker:||Yao Zhan - MediaTek, Inc.
|Authors:||Yao Zhan - MediaTek, Inc.
Hongcai Cui - MediaTek, Inc.
Xiaobing Zhang - MediaTek, Inc.
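The hierarchical-seeding idea this abstract relies on can be illustrated outside SystemVerilog. The following Python sketch (all names hypothetical, not from the paper) derives each component's random stream from its hierarchical name rather than from creation order, which is the property that keeps scenarios reproducible while the bench is being patched:

```python
import hashlib
import random

def child_seed(parent_seed: int, name: str) -> int:
    """Derive a stable per-component seed from the parent seed and the
    component's name, so every component owns an independent stream."""
    digest = hashlib.sha256(f"{parent_seed}:{name}".encode()).hexdigest()
    return int(digest[:16], 16)

class Component:
    def __init__(self, name: str, parent_seed: int):
        # Seeding by name (not by creation order) keeps each stream stable
        # when unrelated components are added or removed later.
        self.rng = random.Random(child_seed(parent_seed, name))

    def draw(self, n: int):
        return [self.rng.randint(0, 99) for _ in range(n)]

top_seed = 1
a = Component("driver", top_seed)
b = Component("monitor", top_seed)
# "driver" produces the same values whether or not "monitor" exists:
a2 = Component("driver", top_seed)
assert a.draw(5) == a2.draw(5)
```

SystemVerilog's own hierarchical seeding works analogously: each object gets a random state derived from its parent thread, so adding one component does not perturb the streams of its siblings.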
|7.3||Adopting C HAL Functions in Verification to Speed up SSD Controller and Firmware Development|
|With the increasing complexity of SSD (solid state disk) controllers, firmware plays an ever more important role in products. This brings great challenges to the verification team, which must make sure that firmware and controller work together to satisfy system requirements and must help the firmware team deliver high-quality products on time. Based on the UVM methodology, this article presents a solution for the verification team to create and verify C HAL (hardware abstraction layer) functions in the UVM testbench, which involves the firmware team at an early stage and enables firmware development in parallel. It can greatly reduce time-to-market and help achieve product success.|
|Speaker:||Jinsong Liu - Micron Technology, Inc.
|Author:||Jinsong Liu - Micron Technology, Inc.
|7.4||Data Driven IC design and Verification Platform|
|This article addresses three practical issues in daily digital IC design and verification work: 1) how to organize design and verification source code, documentation, verification plans and sessions to enable maximum development parallelization; 2) what information to track to make management easier and more efficient, and how to build a robust metrics-driven system that includes a methodology flow and checklists; 3) how to manage multiple users, tasks and projects on one single platform to enhance team-based productivity and efficiency. The data-driven, continuous-integration-based IC design and verification platform xHub is introduced to illustrate its contribution to IC design and verification efficiency.|
|Speaker:||Alice Y. Gao - China Standard Software Company, LTD & Chinasoft International, Ltd.
|Authors:||Alice Y. Gao - China Standard Software Company, LTD & Chinasoft International, Ltd.
Shuo Li - Ericsson
|7.5||A High Performance Asynchronous Floating-point Adder with BBD Protocol|
|As demand for floating-point computation increases, it is important to find ways to handle floating-point numbers quickly, including addition and multiplication. In this article, we present a floating-point adder built with an asynchronous design methodology. The asynchronous control unit is the Click circuit, which is based on a two-phase handshake protocol. The algorithm contains three blocks, namely pre-normalization, addition and post-normalization, which are managed by the Click pipeline. Compared with the synchronous floating-point adder IP core provided by Xilinx, this design is faster and consumes fewer resources.|
|Speaker:||Pengfei Li - Lanzhou University
|Authors:||Pengfei Li - Lanzhou University
Anping He - Lanzhou University
Caihong Li - Lanzhou University
Zhihua Feng - Institute 706, The Second Academy China Aerospace Science & Industry Corp.
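The three algorithmic stages named in the abstract (pre-normalization, addition, post-normalization) can be sketched in software. This Python toy, using integer significand/exponent pairs, positive operands and no rounding, illustrates only the data path, not the asynchronous Click pipeline itself:

```python
def fp_add(sig_a, exp_a, sig_b, exp_b, frac_bits=23):
    """Add two positive floating-point values given as (significand, exponent)
    pairs, mirroring the three pipeline stages described in the abstract.
    Significands are normalized, i.e. in [2**frac_bits, 2**(frac_bits + 1))."""
    # Stage 1: pre-normalization -- align the operand with the smaller
    # exponent to the larger one by shifting its significand right.
    if exp_a < exp_b:
        sig_a, exp_a, sig_b, exp_b = sig_b, exp_b, sig_a, exp_a
    sig_b >>= (exp_a - exp_b)
    # Stage 2: significand addition.
    sig = sig_a + sig_b
    exp = exp_a
    # Stage 3: post-normalization -- shift back into range on carry-out.
    while sig >= (1 << (frac_bits + 1)):
        sig >>= 1
        exp += 1
    return sig, exp

# With frac_bits=3 the significand 8 encodes 1.0, so 1.0*2^0 + 1.0*2^0
# normalizes to 1.0*2^1:
assert fp_add(8, 0, 8, 0, frac_bits=3) == (8, 1)
```

In the hardware version each stage is a pipeline block handshaking with its neighbors through the Click elements, rather than a sequential function call.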
|7.6||Challenges on Block to Sub-system Level UVM-based Verification and Best Practice in LTE Modem|
|When developing a sub-system-level UVM verification environment from block-level testbenches, many challenges must be overcome to achieve high quality. This article analyzes those challenges, presents the corresponding solutions proven in real LTE modem verification practice, and summarizes the outcome.
The common approach to building a sub-system-level UVM testbench is bottom-up, which runs into the following challenges: verification plan, testbench architecture and coding guidelines.
Significant user guidance has been learned from LTE modem sub-system verification practice, covering two aspects: bench-reuse considerations for block-level DV, and experience and suggestions for sub-system DV. With this guidance, users can develop a sub-system-level UVM bench that maximizes reuse of the existing block-level benches.|
|Speaker:||Wenping Guo - MediaTek, Inc.
|Author:||Wenping Guo - MediaTek, Inc.
|7.7||A Complete Scalable Solution for SoC Low-power Verification|
|Modern SoCs are becoming more and more complex, especially low-power SoC designs with ever more power domains and power states, which makes qualified verification of the SoC low-power functions a big challenge (this paper focuses on dynamic verification of the low-power functions). To verify the low-power functions completely, it is necessary to randomly activate the powered-on devices under test (DUTs) while randomly changing the low-power states within one pattern, with complete checkers and coverage. This paper introduces a complete, scalable solution and platform for complex low-power verification that requires little effort.|
|Speaker:||Gaoxue Xu - Unisoc Communications, Inc.
|Authors:||Gaoxue Xu - Unisoc Communications, Inc.
Henry Chew - Spreadtrum
Mingjie Wu - Spreadtrum
|7.8||Comprehensive Verification of Register Protection for ASIL D|
|Automotive Safety Integrity Level (ASIL) is a risk classification scheme defined by ISO 26262, the Functional Safety for Road Vehicles standard. Safety-related projects adopt different safety mechanisms to meet the ASIL levels. ASIL D is the highest level, for which the single point fault metric (SPFM) must be >= 99%. Various safety mechanisms are used in a chip, such as lock-step, memory ECC, bus ECC, dual rail and register protection EDC. In this paper, we develop a comprehensive verification flow for the implementation of register protection. The basic verification methodology for register protection is similar to that for memory ECC protection; however, it has distinctly different characteristics that make verification challenging: a. the huge number of registers; b. the fault injection method; c. completeness of verification; d. automation. This paper presents best practices for the above challenges.|
|Speaker:||Fenglin Guan - Synopsys, Inc.
|Authors:||Shiwang Ye - Synopsys, Inc.
Fenglin Guan - Synopsys, Inc.
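As a rough illustration of the kind of fault-injection check the abstract describes (not the paper's actual flow), the following Python sketch models a register protected by a single even-parity EDC bit and confirms that a one-bit corruption is detected on read-back:

```python
def parity(word: int, width: int = 32) -> int:
    """Even parity over a register word -- a minimal EDC scheme."""
    return bin(word & ((1 << width) - 1)).count("1") & 1

def write_reg(word):
    # Store the value together with its check bit.
    return (word, parity(word))

def read_reg(stored):
    word, p = stored
    if parity(word) != p:
        raise RuntimeError("EDC error detected")
    return word

# Fault injection: flip one stored bit and confirm the read detects it.
stored = write_reg(0xDEADBEEF)
corrupted = (stored[0] ^ (1 << 7), stored[1])
try:
    read_reg(corrupted)
    detected = False
except RuntimeError:
    detected = True
assert detected
```

A real register-protection campaign repeats this pattern over thousands of registers and automates both the injection and the completeness bookkeeping, which is exactly where the abstract's challenges (register count, injection method, completeness, automation) arise.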
|7.9||A SystemC RTL Co-sim Platform for Architecture Exploration|
|SoC systems are becoming more and more complex. A wide variety of bus masters are joining SoC chips, such as processors, GPUs and various video, modem and AI hardware accelerators, which causes system RTL simulation time to increase to an unacceptable level. However, architecture exploration requires high simulation speed to allow repeated simulation after parameter adjustment until a satisfactory point is reached. Convenience of the performance simulation platform is also urgent, since the large number of SoC parameters needs frequent adjustment and a lot of performance data requires efficient maintenance and processing. We developed a platform in which a group of SystemC models co-simulate with the RTL parts, achieving both simulation speed-up and sufficient simulation accuracy, and which can dump various types of performance data dynamically. A script terminal built with Python and pandas processes and summarizes the data.|
|Speaker:||Chenguang Guo - Allwinner Tech.
|Authors:||Chenguang Guo - Allwinner Tech.
Yibo Liu - Allwinner Tech.
Hao Feng - Allwinner Tech.
Zhe Chen - Allwinner Tech.
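The pandas-based post-processing step mentioned at the end of the abstract might look roughly like the following sketch; the record layout, master names and metrics are all hypothetical, not taken from the paper:

```python
import pandas as pd

# Hypothetical performance records dumped by the co-simulation platform:
# one row per transaction, with bus master, latency and payload size.
records = pd.DataFrame({
    "master":  ["cpu", "gpu", "cpu", "ai_acc", "gpu"],
    "latency": [120, 340, 110, 500, 360],   # cycles
    "bytes":   [64, 256, 64, 1024, 256],
})

# Summarize per bus master: mean latency and total payload moved.
summary = records.groupby("master").agg(
    mean_latency=("latency", "mean"),
    total_bytes=("bytes", "sum"),
)
print(summary)
```

The value of such a terminal is that every parameter sweep dumps a fresh record file, and one short script turns thousands of raw transactions into a per-master table that can be compared across runs.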
|7.10||Using Machine State Snapshots to Aid Verification|
|Debugging can be a very time-consuming procedure in instruction-level constrained-random verification. A major contributing factor is that for some instructions there is a gap between their execution finish points and their check points, during which their results can influence the following instructions, making the error report confusing. A common way to analyze this kind of error is to trace the instruction chain, which can be really complex for a large-scale DUT with a large instruction set. One way to solve the problem is to check the result of every instruction on time, but this can be very hard for some hardware designs and can considerably increase the complexity of the verification environment. Our solution is to locate the instruction that causes an error automatically by using machine state snapshots, which we store in a tree-like structure. The snapshot is a widely used backup technology. In this paper, we claim that using snapshots to copy machine state, such as the values stored in RAM and registers, can improve verification efficiency. By using machine state snapshots we can recover the simulation process approximately or exactly. The snapshot contents can be user-defined, as long as they can be used to recover the simulation process; likewise, the recovery method can be user-defined as long as it can reproduce the error. We attach a time stamp to each snapshot so that the sequence of snapshots tells the verification engineer how the machine state changed through the execution of instructions. A verification engineer can compare the snapshots of different time points to judge whether the behavior of the DUT was rational. Besides, the snapshots can also record some key timing information, which complements coverage.
Based on this recovery idea, we have built an automatic bug-locating tool that uses a set of algorithms to accelerate finding the instructions responsible for the error recorded by the checker. The tool finds the error-triggering instruction step by step with a combination of binary search and the recovery process. We also consider three different strategies, namely recovering from the initial state, recovering from a synchronization point and recovering from any point, and we analyze the time complexity of all three. Additionally, we introduce an address-based search algorithm that traces the error-triggering instruction through address dependencies: we build a tree of instruction dependencies and check the preceding instructions. The check the tool uses is to compare the hardware snapshots at the finish points of the selected instructions against the final state reference provided by the reference model. Finally, we introduce a verification strategy recommender, which uses a decision tree to recommend the best verification strategy according to the complexity of the DUT and estimates of the time consumed by the automatic bug-locating process versus the time spent by verification engineers. The decision tree can update its parameters according to information gathered during the verification process and provided by users. As an example, we use the tool to discover an error caused by a data race and locate the error-triggering instruction automatically. Future work may address the verification of multicore systems, which can be much more complex.|
|Speaker:||Yuzhe Luo - Institute of Computing Technology, Chinese Academy of Sciences & Univ. of Chinese Academy of Sciences
|Authors:||Yuzhe Luo - Institute of Computing Technology, Chinese Academy of Sciences & Univ. of Chinese Academy of Sciences
Xin Yu - Institute of Computing Technology, Chinese Academy of Sciences
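The combination of binary search and snapshot recovery described in the abstract can be sketched as follows. This is an illustrative Python model (function names hypothetical) that assumes the machine state, once diverged from the reference, stays diverged:

```python
def locate_failing_instruction(n_instr, replay_to, check):
    """Binary-search for the first instruction whose execution makes the
    machine state diverge from the reference model.

    replay_to(i) -> machine state after instruction i, recovered from the
                    nearest snapshot and replayed forward.
    check(state, i) -> True if that state matches the reference at step i.
    """
    lo, hi = 0, n_instr - 1
    first_bad = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if check(replay_to(mid), mid):
            lo = mid + 1          # state still good: divergence is later
        else:
            first_bad = mid       # diverged here or earlier: look left
            hi = mid - 1
    return first_bad

# Toy DUT whose state diverges from instruction 6 onward.
ref = list(range(10))
dut = [i if i < 6 else i + 100 for i in range(10)]
print(locate_failing_instruction(10, lambda i: dut[i],
                                 lambda s, i: s == ref[i]))  # prints 6
```

Each `replay_to` call costs only a short replay from the nearest snapshot rather than a full re-simulation, which is what makes the O(log n) number of probes worthwhile.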
|7.11||Complex VIP Environment Reuse to Adapt Changeable Levels of Verification|
|Complex VIPs such as USB3, PCIe and SDIO are used not only for verifying the corresponding IP functions but also for sub-system and SoC level verification. The repetitive work of VIP integration consumes a lot of manpower, which motivates us to construct a reusable approach that adapts to each verification level. This paper demonstrates a solution that vertically reuses the VIP sub-environment across the three verification levels to integrate the VIPs bottom-up efficiently and reuse their stimulus. The approach supports the different verification levels through parameters rather than defines and provides flexible constraints on the VIP features. Besides, it handles both VIPs of different types and multiple VIPs of the same type merged together. The analysis of this methodology and experimental experience from the project are presented in this paper.|
|Speaker:||Meisong Zhu - Spreadtrum
|Authors:||Meisong Zhu - Spreadtrum
Xufeng Zhang - Spreadtrum
Henry Chew - Spreadtrum
|7.12||Multi-mode Analysis for Clock Domain Crossing and Reset Domain Crossing Verification|
|SoC designs are becoming more intricate from one generation to the next. Accuracy, quality of results, performance and reducing the effort involved in result verification are the prime requirements of the verification process. Multi-mode analysis gives designers and verification engineers the capability to identify and configure the set of operational modes. Identifying operational modes reduces the complexity of an otherwise intricate design, one with a huge number of violations and cautions, into groups relevant to the operational modes of the design. Running clock domain crossing analysis and reset domain crossing analysis in the identified operational modes returns more accurate results with pessimism eliminated, better performance in terms of reduced manual effort during result validation (compared with validating the complete design at once) and shorter turnaround time.|
|Speaker:||Apoorv Aggarwal - Advanced Micro Devices, Inc.
|Author:||Apoorv Aggarwal - Advanced Micro Devices, Inc.
|7.13||A FSM-random Based Verification Method|
|IPs such as USB, PCIe and SATA all have complex FSM transition mechanisms. Covering every path, including corner paths, in the FSM has become both a difficulty and a key point for IP design verification in complex scenarios.|
|Speaker:||Hongji Wan - MediaTek, Inc.
|Author:||Hongji Wan - MediaTek, Inc.
|7.14||Scaling Formal Connectivity Checking to Multi-billion Gate SoCs with Specification Automation|
|Connectivity checking is a popular formal verification application. Formal tools can automatically generate assertions from a specification table and prove them exhaustively. Simulation-based verification, on the other hand, requires significantly more effort while providing a fraction of the coverage. However, chip complexity is rapidly increasing: many ASIC and FPGA projects have hundreds of thousands of deep connections to verify, and the computational challenge is enormous. Furthermore, creating the connectivity specification is a time-consuming, error-prone task. The most recent papers on formal connectivity checking report results on designs of up to 200 million gates, with up to 132 thousand connections proven. This paper presents an innovative approach that addresses both the specification and computational challenges and scales formal connectivity checking to previously intractable problems. Results are reported on a multi-billion-gate SoC fabric in the latest technology node, with over 1 million connections to specify and verify.|
|Speakers:||Sasa Stamenkovic - OneSpin Solutions GmbH
Nicolae Tusinschi - OneSpin Solutions GmbH
|Authors:||Imtiyaz Ron - Xilinx Inc.
Sasa Stamenkovic - OneSpin Solutions GmbH
Sergio Marchese - OneSpin Solutions GmbH
Nicolae Tusinschi - OneSpin Solutions GmbH
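The "specification table in, assertions out" flow the abstract describes can be pictured with a minimal sketch; the spec rows, signal names and rendered SVA text below are all hypothetical illustrations, not the tool's actual format:

```python
# Hypothetical connectivity spec rows: (source signal, destination signal,
# optional enable condition). A formal tool generates one assertion per row.
spec = [
    ("soc.pad_uart_tx", "soc.uart0.tx", None),
    ("soc.clk_root",    "soc.gpu.clk",  "soc.gpu_clk_en"),
]

def to_assertion(src, dst, cond):
    """Render one connectivity check as an SVA-style property string."""
    guard = f"{cond} |-> " if cond else ""
    return f"assert property (@(posedge clk) {guard}({dst} == {src}));"

for row in spec:
    print(to_assertion(*row))
```

At the scale the paper targets (over a million connections), generating and managing these rows is itself the bottleneck, which is why the approach automates the specification as well as the proofs.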
|7.15||Making Testbench Synthesizable to Adapt Palladium XP for Achieving Best Emulation Speed|
|Any RTL revision close to freeze leaves too little time for adequate regression. Emulators such as the Palladium XP (PXP) offer a solution to this problem. The simulation acceleration mode, which lets the simulator cooperate with the emulator, not only boosts regression speed but also preserves the transaction-based, layered and maintainable methodology of the currently popular UVM. However, frequent software/hardware interactions and the complicated randomization constraints used in a UVM testbench lower the achievable acceleration due to the nature of PXP. In this paper, a synthesizable structure is introduced that aims to maximize emulation speed on the PXP platform. This usage mode makes nearly every bench component accelerable and reduces the influence of software/hardware interaction and constraint solving. Similar to the UVM structure, it keeps a layered structure that is easy to maintain.|
|Speaker:||Deyong Yang - Unisoc Communications, Inc.
|Authors:||Harold Zhang - Cadence Design Systems, Inc.
Deyong Yang - Unisoc Communications, Inc.