“...just doing things the way you’ve always done them is not the path to success, especially when your competition is changing the game.”
I am a big fan of the television show Jeopardy! and, like many of you, I have been intrigued by James Holzhauer, who, as of this writing, is the second-winningest contestant in the show’s history. During his current 22-game winning streak, Holzhauer has amassed $1,691,008 in winnings, including the top 11 single-game totals in the show’s history (Ken Jennings set the records of 74 consecutive wins and $2,520,700 in winnings back in 2004). For those of you not familiar with the show, it is a general-knowledge quiz show consisting of two rounds in which players choose from six clue categories, each containing five clues of increasing dollar value from top to bottom. A player who responds correctly is awarded the clue’s dollar amount and gets the next choice of clue; an incorrect response deducts that amount. The third round, “Final Jeopardy!,” consists of a single clue on which contestants can wager up to their accumulated winnings.
Holzhauer has achieved his amazing success through two complementary approaches. The first is his analytics-based approach to the game. Instead of starting at the top (lowest-value) clue in a category and proceeding down through that category, as most players tend to do, he starts with the bottom (highest-value) clues and proceeds across the bottom of the board, accumulating large amounts of money very quickly. Since the “Daily Double” squares (which allow the contestant to wager as much money as they wish) tend to be toward the bottom of the board, Holzhauer is often able to further enrich his total before his opponents have really gotten started. The other key to his success, of course, is that he has answered 97% of the questions correctly.
The only comparable Jeopardy! performance I can recall is Bill Murray’s in the movie Groundhog Day, where his character knows all the answers because he has seen the same episode over and over again and has obviously memorized them. I think both of these examples show that just doing things the way you’ve always done them is not the path to success, especially when your competition is changing the game. So, with this thought in mind, we’ve assembled a great DAC edition of Verification Horizons for you, with each article showing how you can take a new approach to some aspect of functional verification.
We start this issue with “SystemC FMU for Verification of Advanced Driver Assistance Systems” from my Mentor colleagues, Keroles Khalil and Magdy A. El-Moursy, over in Egypt. They discuss how to package virtual platforms running real software into functional mock-up units (FMUs), each modeling a piece of a car. By connecting these pieces through the standard Functional Mock-up Interface (FMI), you can assemble a system-of-systems to model and verify complex digital, analog, and mechanical behavior, up to and including a complete ADAS.
After that, Rich Edelman will show us how to have “Fun with UVM Sequences – Coding and Debugging.” In his own inimitable way, Rich walks us through a progression of increasingly complex (and previously intimidating) sequences and shows us that they’re all “just code” and nothing to be afraid of. He also shows us how Questa® can help you visualize your transactions to make debug easier.
Next, we continue our Portable Stimulus series with Matthew Ballance’s “Creating Test the PSS Way in SystemVerilog.” In it we learn how Questa® inFact supports Portable Stimulus (PSS) so you can focus on what you want to verify instead of worrying about how to model it in SystemVerilog/UVM. We’ve packaged some of the critical Portable Stimulus capabilities into PSS Apps that can read in your existing SystemVerilog classes and covergroups, improving verification productivity by getting the most out of your existing testbenches.