
The 'what and why' of transaction-level modelling (TLM)

Bryan Bowyer

Source: http://www.eetimes.com/showArticle.jhtml?articleID=183700982

Advances in the physical properties of chips and in design tools make it possible to build huge systems into just a few square millimeters. The problem is that modeling these systems at the register-transfer level (RTL) is labor-intensive, and simulation run-times are so long they have become impractical. If this is a problem today, just imagine trying to design, integrate and verify the even more massive systems we will build 10 years from now.

Transaction-level models (TLMs) can help with the design, integration and verification of large, complex systems. TLMs let designers model hardware at a higher level of abstraction, smoothing integration by providing fast simulation and simpler debugging.

Designers start with parts at different levels of abstraction, often including algorithmic models written in pure ANSI C++. These models are combined with a detailed specification of how they should be brought together into a system. The models are then divided among several design teams for implementation in RTL. Other pieces--often most of the system--consist of existing blocks reused in the new design.

Algorithmic synthesis tools help RTL designers quickly implement new, original content for blocks. This creates a fast path from a collection of algorithms to a set of verified RTL blocks that need to be integrated. But any errors or misunderstandings in the specifications for the system or for the intellectual-property (IP) blocks will still lead to a system that doesn't work.

Transaction-level models could simplify the integration and testing, but where to get the models? Attempts to manually create TLMs in SystemC by adding hardware details to the pure ANSI C++ source are often as error-prone and time-consuming as manually writing RTL.

While this effort is certainly justified for reusable blocks, someone still has to maintain these models. For the original signal-processing content, however, the best approach is for the algorithmic synthesis tool to simply generate the TLMs as part of the design and verification flow.

An added benefit of this approach is that system modeling and integration can now be used to refine each block in your system. Information gathered during integration is fed back into the algorithmic synthesis flow, allowing blocks to be reoptimized based on the system.

In RTL, simulation is usually synchronized based on a clock. Every event on the clock results in a point in time where all of the blocks in the system can synchronize. In a TLM, the synchronizations occur when data is communicated between two blocks, which means the clock is no longer needed for simulation. This communication is called a transaction.

A transaction is an aggregate of activity that occurs in a system in a bounded time period. The activity of interest begins at a particular time and ends some time later. All the operations, state changes, data movements and computations that occur in a particular design unit or between two units are transactions.

Transaction-level models represent components as a set of concurrent, communicating processes that calculate and represent their behavior. The models describe complex systems at a high level of abstraction. Models communicate by exchanging "transactions" through an abstract channel, separating communication from computation. Working at this higher level of abstraction speeds simulation--up to 1,000 times faster than RTL or cycle-accurate modeling. By modeling at the transaction level early in the design cycle, designers can find an optimal architecture before committing to low-level details of a complete implementation. During functional verification, engineers can reuse the TLMs to check that the RTL implementation is functionally equivalent to the high-level model.

TLMs are an integral part of a progressive refinement strategy. Unlike RTL, TLMs encompass multiple abstraction levels from cycle-accurate to untimed token models. Verification tools then allow RTL and transaction-level models to be combined in the same simulation.

For faster simulation, the design team can use higher-level TLMs with minimal details, refining the models over time to include more information.

Called progressive refinement, this process incrementally adds structure, concurrency and parallelism for transaction-level, behavioral and RTL models. At each level, the design is simulated and verified, helping to find bugs and optimization opportunities long before RTL synthesis.

Typically, progressive refinement requires several steps. Once the design is captured at the algorithmic level in pure ANSI C++, a transaction-level model is created that is untimed and does not define the bus, leaving it generic. After the architecture has been validated, the next level of detail is to describe the bus architecture and add some approximate timing values.

To move further down in abstraction, the designer typically expands the model to be cycle-accurate and to accommodate word transfers. Once the design is verified down to this level, the design team usually has confidence in the design's quality and can commit to synthesizing to the RTL.

After the RTL is created, modern verification tools allow design groups to continue to use the TLM to verify that the RTL is correct. This incremental approach offers a highly effective way of uncovering bugs for large blocks, including the CPU, embedded memory, digital signal processing and third-party IP in ASIC or FPGA designs. Unfortunately, progressively refining transaction-level models is largely a manual, extremely time-consuming process. Moreover, misinterpretations are introduced as models are hand-coded, creating a verification and model-maintenance nightmare. Automating this process would reduce coding errors and errors resulting from improper interpretation of specs, leading to more-productive TLM methodologies.

Signal-processing TLMs

Eager to adopt transaction-level modeling, designers are looking for ways to streamline the generation of these models. Fortunately, advanced algorithmic synthesis has solved the problem of creating models for one very important area of design: signal-processing hardware.

Algorithmic synthesis tools automate TLM and RTL model generation from a single, concise source. The same high-level description is verified consistently from C down to RTL, including through high-speed TLMs, ensuring that the system engineer's intent is preserved.

Algorithmic synthesis tools automatically generate transaction-level models from a pure ANSI C++ description, adding structure, parallelism and concurrency to create models at various levels of abstraction. These SystemC and SystemVerilog models, with their hierarchy and parallelism, provide design teams with powerful options for system-level verification. The interface of the SystemC model has the same behavior as the RTL generated by synthesis tools, but is optimized to simulate much faster.

Instead of manual progressive refinement, algorithmic synthesis methodologies automatically add hardware details to the algorithmic C++ model to generate a cycle- and bit-accurate behavioral SystemC model. That means designers can more rapidly explore and verify architectural trade-offs and achieve faster verification of their optimized designs while reusing existing C++ and SystemC testbenches throughout the flow. This methodology lets designers deliver an optimized, error-free implementation of the signal-processing hardware without compromising time-to-completion goals for this block of the design.

Essentially, the pure ANSI C++ models act as a "golden source." Interface synthesis and sequential-to-structural transformations are employed to automatically generate SystemC or RTL hardware descriptions without changing the original pure ANSI C++ source. The result is behavioral SystemC models that simulate 20 to 100 times faster than RTL, with the eventual potential of generating more-abstract transaction-level SystemC models that simulate more than 1,000x faster than RTL.

The interface synthesis technology of algorithmic synthesis methodologies also automatically generates transactors that synchronize timed RTL with a sequential or transaction-based test environment. Transactors, which can be as complex as the block itself, allow designers to use a single sequential C++ or SystemC-based testing environment for the entire design flow.

A testbench can also automatically compare the C++ input to the RTL output, providing debug information for specific synchronization points in the case of a simulation mismatch. Again, these capabilities let designers use or reuse sequential C++ descriptions and testbenches to generate technology-specific hardware without modifying the algorithmic model.

Transaction-level modeling is a powerful technique for verifying complex systems at a high level of abstraction. The transaction-level model can be used for debugging, and for collecting and distilling performance and verification coverage information.

By Bryan Bowyer, technical marketing engineer in Mentor Graphics Corp.’s high-level synthesis division (bryan_bowyer@mentor.com).

Courtesy of the EDA News section of eetimes.com.



DonNTU master's student Alexander Sergeevich Smeshkov