Hybrid Verification of Protocol Bridges

Praveen Tiwari and Raj S. Mitra, Texas Instruments

Editor's note: Simulation and formal verification are complementary approaches, one offering scalability and the other ensuring completeness. The authors describe a hybrid framework that uses both approaches for protocol bridge verification. In two case studies, they demonstrate that the hybrid framework can find bugs that each individual approach alone cannot find.
– Jayanta Bhadra, Freescale Semiconductor

THE PROBLEM WITH verification by simulation is well known: It is inherently incomplete because of its inability to cover all possible scenarios. Not only is it physically impossible to enumerate all test cases for a real-life complex design, but attempts to completely cover the functional space through directed random testing also sometimes fail in their objective because of the huge simulation runtimes they entail. As a result, functional verification by simulation usually does not lead to definite verification closure.

Formal verification, on the other hand, holds the promise of "complete" verification. By proving a set of assertions on the design, formal verification covers all possible scenarios for arriving at those assertions, thereby guaranteeing the quality and closure of verification. Unfortunately, though, current state-of-the-art formal-verification techniques restrict the size of the designs that can be handled. The core formal-verification engines can typically handle about 200 state elements at a time, and with suitable abstraction techniques this limit can increase to between 300 and 3,000 state elements. This severe restriction on size typically limits the applicability of formal verification for proving end-to-end properties on real-life designs, whose size is often in the range of 5,000 to 10,000 state elements, including FIFO buffers and shift registers, which usually significantly increase the proof complexity of formal verification.

To overcome this problem with formal verification, the common textbook solution is to partition the design into smaller blocks and verify each one separately by applying assume-guarantee reasoning at their boundaries [1,2]. Although conceptually simple, this approach has practical limitations. First, if the module is imported from a third-party source, then white-box information is usually not available. So, partitioning isn't possible, and only end-to-end properties can be written and potentially proven. Even if the module's white-box information is available, in highly interleaved and complex control logic, heavy dependencies exist between the different design components. In such cases, even if a partitioning is done, writing the constraints to represent the boundary assumptions almost amounts to including the functionality of the other modules in their full complexity, thereby defeating the original purpose of partitioning.

Hence, it's usually necessary to apply formal verification on very small modules or else be content with bounded proofs on realistically large modules. But there is no denying that despite its capacity problems, formal verification has its strengths and utility and is a complementary technique to simulation. Used judiciously, simulation and formal techniques can complement each other's strengths, thereby balancing their respective weaknesses. There are different ways to combine these two techniques (see the "Hybrid verification techniques" sidebar). In this article, we present two case studies of the application of one such technique: the hybrid verification of a serial protocol. We chose this example because of serial protocols' added complexity compared to parallel protocols. Both case studies involve the same serial protocol (I2C) but demonstrate different aspects of its verification. Selection of the same protocol in both case studies also demonstrates the tremendous value of reuse in formal verification.
Sidebar: Hybrid verification techniques

The term "hybrid verification" refers to the interaction of simulation and formal verification, with the goal of expanding the scope of formal techniques to larger designs: not only directly applying formal techniques, but also influencing the verification of modules that are not directly within their scope. This interaction broadly falls into three categories:

- Simulations help formal verification. This includes either using simulation traces to infer invariants about a module [1] (which can then serve as constraints when applying formal techniques), or using simulations to bring the module to some particular states and then applying formal verification.

- Formal techniques help simulations. This involves using formal techniques to amplify the coverage gained from simulations, and using simulations and formal techniques in an interleaved fashion. Here, the same constraints used in simulations are reused in the formal-verification phase, thus requiring a common verification setup.

- Simulations and formal techniques interact during verification planning. At the highest level of interaction, simulations and formal techniques interact at a semantic level during the verification planning stage (when the modules are partitioned into different categories of verification according to an assessment of the type and size of the modules) and at the reporting stage (the results of these different techniques must be in a uniform format and have the same semantic content, so that they can be collated into a common verification report).

Automated hybrid approaches to formal verification involve using constrained random simulation to pick starting states for formal-proof engines [2,3]. In the specific domain of target-driven logic simulation, one of the first efforts is that of Yang and Dill [4], which directs a random simulator to hit a goal by enlarging a verification target through backward traversal (pre-image computation), so that the simulator can hit any of the states in the enlarged target. Pre-image computation for realistic designs can be very expensive and cannot be used for more than four to five time steps. Kuehlmann et al. present a probabilistic guiding algorithm that assigns values to design states on the basis of their estimated probability of leading to the target state [5]. Researchers have also suggested approaches for reaching a target by exploring a range of potential next states in a simulation environment on the basis of cost function analysis and automatically generated lighthouses [6-8]. The effectiveness of these approaches relies heavily on the quality of the testbench. For protocol bridges involving serial protocols, such automated hybrid approaches would need extensive randomized simulation to hit corner cases.
The technique described in this article uses simulations to bring the module to some particular states and then applies formal verification. But in this case, the initial state and system partitioning are not automated; they are determined by the user.

Sidebar references

1. S. Hangal et al., "IODINE: A Tool to Automatically Infer Dynamic Invariants for Hardware Designs," Proc. 42nd Design Automation Conf. (DAC 05), ACM Press, 2005, pp. 775-778.
2. K. Albin et al., "Constraint Synthesis for Environment Modeling in Functional Verification," Proc. 40th Design Automation Conf. (DAC 03), ACM Press, 2003, pp. 296-299.
3. E. Pastor and M.A. Pena, "Combining Simulation and Guided Traversal for the Verification of Concurrent Systems," Proc. Design, Automation and Test in Europe Conf. (DATE 03), IEEE CS Press, 2003, pp. 1158-1159.
4. C.H. Yang and D. Dill, "Validation with Guided Search of the State Space," Proc. 35th Design Automation Conf. (DAC 98), ACM Press, 1998, pp. 599-604.
5. A. Kuehlmann et al., "Probabilistic State Space Search," Proc. Int'l Conf. Computer-Aided Design (ICCAD 99), IEEE CS Press, 1999, pp. 574-580.
6. M. Ganai, A. Aziz, and A. Kuehlmann, "Enhancing Simulation with BDDs and ATPG," Proc. 36th Design Automation Conf. (DAC 99), ACM Press, 1999, pp. 385-390.
7. P. Yalagandula, V. Singhal, and A. Aziz, "Automatic Lighthouse Generation for Directed State Space Search," Proc. Design, Automation and Test in Europe Conf. (DATE 00), IEEE CS Press, 2000, pp. 237-242.
8. S. Shyam and V. Bertacco, "Distance-Guided Hybrid Verification with GUIDO," Proc. Design, Automation and Test in Europe Conf. (DATE 06), IEEE CS Press, 2006, pp. 1211-1216.

Protocols and their verification

Verification of modern-day protocols (OCP, AXI, I2C, and so on) is a big challenge because of complex features such as extensive pipelining and highly configurable modes [3-5]. This task grows more difficult for serial protocols, in which the data and control bits are intermingled and for which there is sometimes no explicit clock. Here, the state of the protocol transaction at any instant is a function of all the previous bits transferred on the bus, and each transaction spans hundreds of cycles. Maintaining the entire history in the verification environment greatly increases the complexity of the verification process. In addition, several formal-unfriendly components such as counters, FIFO buffers, and shift registers are necessary to handle serial protocols, thereby increasing the complexity of formal verification.

Modeling a protocol's properties

Efficient modeling of protocol properties is the first step toward successful formal verification. This involves efficiently coding the properties and structuring them into groups or abstraction layers, such that the verification of a higher layer can safely assume that the behavior of a lower layer that has already been verified is correct. This set of assumptions significantly reduces the data processing load at each layer and is another way of looking at assume-guarantee reasoning.

Most protocols, regardless of their unique features, follow a common structure and hierarchy in their data processing and control passing. This hierarchy is commonly referred to as the OSI (open systems interconnection) reference model. This layering isn't just an abstract concept. From a designer's perspective, it serves to structure the problem into a hierarchy. From a verification perspective, this hierarchy makes it possible to use the same structure to partition the properties into separate layers by reconstructing the data in a bottom-up fashion and building up the protocol, going forward. Figure 1 illustrates this bottom-up reconstruction of data, and it forms the foundation of our methodology for protocol validation. Layering the properties and sequencing their proofs cuts down the verification state space, reduces turnaround time, and thus speeds up overall verification.

[Figure 1. Bottom-up protocol data analysis.]

Coding and using properties in our bottom-up protocol data analysis approach involves the following steps (a sketch follows the list):

- For each layer, use generic and small FSMs for assertion writing.
- FSMs for higher layers use output flags of FSMs for lower layers.
- Prove assertions for the bottommost layer.
- Use proven assertions as constraints (this is another way to look at assume-guarantee reasoning).
- Subsequently, prove assertions for higher layers.
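The following minimal Python sketch illustrates this layering, assuming a simplified I2C-like serial stream (START/STOP detection and bit sampling only, ignoring ACK bits for brevity). The class names and sampling rules are illustrative assumptions, not code from the article; in a real flow these would be assertion FSMs (for example, in SystemVerilog) proven layer by layer.

class BitLayerFSM:
    """Bottommost layer: watches raw (scl, sda) samples and raises flags."""

    def __init__(self):
        self.prev_scl, self.prev_sda = 1, 1
        self.start_flag = self.stop_flag = self.bit_flag = False
        self.bit_value = 0

    def step(self, scl, sda):
        self.start_flag = self.stop_flag = self.bit_flag = False
        if scl and self.prev_scl and self.prev_sda and not sda:
            self.start_flag = True          # SDA falls while SCL high: START
        elif scl and self.prev_scl and not self.prev_sda and sda:
            self.stop_flag = True           # SDA rises while SCL high: STOP
        elif scl and not self.prev_scl:
            self.bit_flag, self.bit_value = True, sda   # sample on SCL rise
        self.prev_scl, self.prev_sda = scl, sda


class ByteLayerFSM:
    """Higher layer: consumes only the lower layer's output flags."""

    def __init__(self):
        self.bits = []
        self.in_frame = False

    def step(self, bit_layer):
        if bit_layer.start_flag:
            self.in_frame, self.bits = True, []
        elif bit_layer.stop_flag:
            # Byte-layer assertion: a frame must carry whole bytes.
            assert len(self.bits) % 8 == 0, "STOP arrived mid-byte"
            self.in_frame = False
        elif bit_layer.bit_flag and self.in_frame:
            self.bits.append(bit_layer.bit_value)


# Driving both layers over a sampled trace of (scl, sda) pairs:
bit_fsm, byte_fsm = BitLayerFSM(), ByteLayerFSM()
for scl, sda in [(1, 1), (1, 0)]:   # idle, then a START condition
    bit_fsm.step(scl, sda)
    byte_fsm.step(bit_fsm)

The key design point is that ByteLayerFSM never looks at the raw bus: once the bit layer's assertions are proven, its output flags can be assumed correct, which is the assume-guarantee step in the list above.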
Modeling abstractions

Formal verification is all about using different techniques and modeling strategies to reduce the state space so that the tools can run effectively. The formal tool's internal (automated) abstraction techniques can go only so far in making this reduction. Great effort is also needed to efficiently model the design and its properties so that the state space size is manageable. One feature requiring such modeling is a FIFO buffer.

FIFO buffers are present in most protocol bridges and especially in bridges for serial protocols. The huge number of state elements and the double-counting (that is, up-down counting) mode in FIFO buffers is prohibitively expensive for formal-verification runs. One simple way to reduce the number of state elements is to scale down the depth and width of the FIFO buffers. But sometimes this approach is not feasible: Either the parameters (especially the depth) cannot be reduced in isolation from other components of the design, or a reduction in the size could compromise the completeness of verification.

In our approach, we use an efficient hybrid technique to verify FIFO buffers. We feed in n data values to the FIFO buffer in a round-robin fashion, and check whether these values come out of the FIFO buffer in the same sequence. Here, n is the smallest integer that does not divide N, the depth of the FIFO buffer. For example, for a FIFO buffer of depth 64, n would be 3, and we would feed the FIFO buffer with the following sequence of data: 1, 2, 3, 1, 2, 3, ... (22 times). Restricting n to a small value also cuts down on the width of the FIFO buffer, so the higher-order bits are tied to a constant value. If an overflow or underflow bug is present in the FIFO buffer, the output data will deviate from the round-robin behavior. However, this approach does not provide a complete verification of the FIFO buffer, just as no hybrid technique can provide a complete verification solution. The sketch below illustrates the round-robin check.
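As a rough illustration, here is the round-robin check in Python, with a deque standing in for the design's FIFO; the helper names are ours, not the article's. In the actual methodology, the pattern is loaded by directed simulation and the ordering check is then discharged formally.

from collections import deque
from itertools import cycle

def smallest_non_divisor(depth):
    """Smallest n that does not divide the FIFO depth (1 divides
    everything, so start at 2). For depth 64 this returns 3, as in the text."""
    n = 2
    while depth % n == 0:
        n += 1
    return n

def round_robin_check(fifo, depth, ops=10_000):
    """Feed the repeating pattern 1..n and assert it pops out unchanged.
    Because n does not divide the depth, a read/write-pointer wraparound
    bug (overflow or underflow) shifts the output by a non-multiple of the
    pattern period, so it cannot silently realign the pattern with itself."""
    n = smallest_non_divisor(depth)
    source, expected = cycle(range(1, n + 1)), cycle(range(1, n + 1))
    for _ in range(ops):
        fifo.append(next(source))       # write side
        got = fifo.popleft()            # read side
        assert got == next(expected), f"round-robin order broken: got {got}"

round_robin_check(deque(), depth=64)    # passes for a correct FIFO model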
There are many other abstraction techniques that can make formal-verification tools work on real-life designs, each innovative and ad hoc in its own way. But in principle, the main approach in formal verification is to make the problem smaller by partitioning, through structural, functional, and temporal means. There is no hard set of rules to determine which technique to apply and when. The verification engineer must select the relevant technique from an available repertoire, relying on deep knowledge of the design.

Hybrid verification

As discussed earlier, neither simulation nor formal verification, when applied separately, yields the desired goal of verification. A mix of these complementary techniques, a hybrid approach, involves functionally and temporally partitioning the module and applying the two techniques in an appropriate sequence.

Internally, verification tools can mix these techniques in various ways (as the "Hybrid verification techniques" sidebar explains), but from a methodology perspective, and to stitch a hybrid flow together using discrete tools, the basic strategy is as follows: We use simulation to guide the system to some known states, and then we use formal verification in each of these logical partitions of the system to attain the coverage goals. Typical scenarios in which a directed simulation can bring the design to a known state include employing complex register programming (software configuration), applying a functional reset sequence after system reset, fixing the modes of operation, and loading the FIFO buffer with a predetermined pattern of data. Formal verification can then take place, with appropriate pin constraints, for the portion of the functionality that starts after this known state, as Figure 2 shows.

[Figure 2. Bringing a design to a known state.]

There are, thus, four parts to this hybrid methodology (a skeletal sketch follows the list):

- performing functional and temporal partitioning,
- applying directed simulation to bring the system to some desired states,
- constraining the pins and signals that are no longer needed, and
- applying formal techniques.

We can apply this sequence iteratively, over different segments of the design and over a large set of initial states. We can automate the flow for a large system by selecting different initial states of interest.
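To make the sequencing concrete, here is a skeletal orchestration of the four parts in Python. Every name in it (Partition, simulate_to_state, run_formal, and the stub wiring) is a hypothetical placeholder, not the authors' tooling or any real tool's API.

from dataclasses import dataclass

@dataclass
class Partition:
    name: str               # one mutually exclusive functional partition
    setup_sequence: list    # directed-simulation stimulus reaching its state
    pin_constraints: dict   # pins/signals tied off once that state is reached
    properties: list        # assertions to prove from the reached state

def hybrid_verify(partitions, simulate_to_state, run_formal):
    """For each partition: simulate to a known state (part 2), constrain the
    now-irrelevant pins (part 3), then apply formal techniques (part 4).
    Part 1, the partitioning itself, is the user-supplied list."""
    results = {}
    for part in partitions:
        snapshot = simulate_to_state(part.setup_sequence)
        results[part.name] = run_formal(snapshot,
                                        constraints=part.pin_constraints,
                                        properties=part.properties)
    return results

# Example wiring with stub tool hooks:
demo = [Partition("master_tx", ["program_regs"], {"scan_en": 0}, ["p_proto"])]
print(hybrid_verify(demo, lambda seq: ("state", seq),
                    lambda s, constraints, properties: "proven"))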
Functional partitioning

Functional partitioning is a compositional-reasoning technique in which a larger concurrent or sequential system is analyzed by reasoning about its individual functional pieces. In structural partitioning, the idea is to partition a design into a set of functionally independent and spatially disjoint components, whereas in functional partitioning the idea is to partition the functionality itself into mutually exclusive logical partitions. The major steps are as follows:

1. Identify the mutually exclusive functional partitions (such as the different modes of operation) that the design supports.
2. Identify the interaction between these partitions: that is, whether they impose no restrictions on each other (for example, independent read and write operations) or must follow a sequence (for example, a read-exclusive operation in the Open Core Protocol should always be followed by a write operation).
3. Identify the shared resources that play a role in one or more partitions.

From step 2, if the interactions are independent, then we use input restrictions (for example, pin tie-offs) to enable a particular partition for validation. If the interactions are sequential, then we must either write constraints for each of the specific partitions and glue them together sequentially, or use directed simulations to reach the particular partition we want to validate, thereby bypassing the intermediate partitions.

Case study: ARM7-I2C bridge

This case study involves the verification of a two-stage bridge used for data transfer from an ARM7 processor to a serial I2C bus, and vice versa (see Figure 3). A dense, nonstandard interface connects the two stages. For our purposes here, it suffices to know that this dense, ill-documented legacy interface prevented the isolation of the two stages; as a result, the entire bridge had to be treated as a black box and verified as a whole.

[Figure 3. ARM7-I2C bridge.]

The ARM7 stage had two FIFO buffers, each of depth 8, for the transmit and receive paths. In addition, there were several configuration registers for configuring the block as master/slave, receiver/transmitter, and so on. For the other stage, I2C is a standard serial protocol in which each single-byte transfer spans several hundred clock cycles. The total flop count of the entire two-stage bridge system was 805. The verification target for the block consisted of protocol checks, data integrity checks, and arbitration checks under various modes of operation.

We coded the properties for protocol checking in a layered fashion for I2C protocol validation. We initiated verification using simulation to configure the bridge as a master (or slave) by programming the internal configuration registers via a 32-bit-wide ARM7 interface. After completing this operation-mode programming, we conducted further analysis using formal techniques. We performed protocol compliance checks for both the transmission and the reception modes. We also carried out similar protocol compliance checks for the parallel interface.

While checking for data integrity, we noted that in the transmit mode (ARM7-to-I2C data transfer), the host software regulated data flow. Furthermore, the software was responsible for preventing overflow and underflow conditions. Hence, we filled up the FIFO buffer with a specific sequence of data using directed simulation. We then switched verification to the formal techniques and checked for the same sequence at the output. However, for the receive mode, no such software control was possible, so we wrote further assertions to detect potential overflow or underflow bugs.

Modeling the arbitration of a serial protocol was a very complex problem. Not much constraining was possible on the serial input behavior, because both bus lines were expected to behave arbitrarily. Moreover, environment modeling had to account for an aspect of the design behavior in which, after losing arbitration, reconfiguration of the registers was necessary to initiate further transfers. This prevented constraining the registers to static values before beginning formal analysis.

We used a hybrid technique in an innovative manner here. First, we wrote a property to model the arbitration loss. This property was made to fail, and the formal tool generated a counterexample indicating a scenario for arbitration loss. Next, we used this counterexample in a simulation to go to this lost state, and we proved properties for arbitration recovery. Without this hybrid technique, the depth of the combined loss and recovery scenario would be too great for a formal tool to handle.
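Below is a sketch of this counterexample-replay trick, with hypothetical tool hooks (prove_property, replay_trace) and pseudo-SVA property strings; none of these names come from the article or any specific tool.

from types import SimpleNamespace

def verify_arbitration_recovery(design, prove_property, replay_trace):
    # Step 1: assert that arbitration loss never happens. The formal tool is
    # expected to refute this and hand back a trace reaching a lost state.
    result = prove_property(design, "never arbitration_lost")
    assert not result.proven, "expected a counterexample witnessing loss"

    # Step 2: replay that counterexample in simulation, reaching the
    # arbitration-lost state far more cheaply than a deep formal search.
    lost_state = replay_trace(design, result.counterexample)

    # Step 3: prove the recovery properties starting from the lost state,
    # where the remaining proof depth is shallow enough for the engine.
    return prove_property(lost_state,
                          "arbitration_lost implies eventually_recovered")

def _stub_prove(design, prop):
    # Stub hook: refute the "never lost" property; prove everything else.
    refuted = prop == "never arbitration_lost"
    return SimpleNamespace(proven=not refuted,
                           counterexample=["trace"] if refuted else None)

print(verify_arbitration_recovery("dut", _stub_prove,
                                  lambda d, t: ("lost_state", t)))

In effect, simulation pays for the deep prefix of the scenario (reaching the arbitration-lost state), and the formal engine handles only the shallow suffix (the recovery).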