The ABCs of Winning at the 2017 Hardware Model Checking Competition

For each of the last nine years, teams of researchers and software developers have come together to compete in the Hardware Model Checking Competition (HWMCC). The contest pits some of the brightest minds in design and verification, and the solvers they have developed, against one another. Each team has worked tirelessly over the past year to get its solver ready for the big day.

Vigyan Singhal (center), President and CEO of Oski, presents the award to ABC team member Yen-Sheng Ho (far right) at the HWMCC award ceremony. Armin Biere, organizer of the HWMCC, is shown at far left.

The competition features benchmarks in three tracks: the single safety property track, the liveness property track, and the deep bound track, of which Oski is a sponsor. The benchmarks come both from problems left unsolved by competitors in previous years' competitions and from leading companies in industry, like Oski, that contribute problems directly related to the most complex issues of the day. It is this combination of benchmarks that pushes these teams to develop solvers so robust that, after the competition, many of their techniques are adopted by large system houses to help tackle industrial-strength challenges.

Of all the teams competing in the HWMCC, only one has consistently taken first place in the single safety property track since entering the competition in 2008. This year, however, they not only won the single safety property track but also ran away with first place in the liveness track and the deep bound track. No other team in the competition's history has ever done that. The team that achieved this is the ABC team from the University of California at Berkeley. Recently, I got a chance to sit down with them to find out the secret to such unprecedented success.

ABC Team from left to right: Yen-Sheng Ho, Prof. Robert K. Brayton, Alan Mishchenko, and Baruch Sterin (not pictured)

The team, led by Prof. Robert K. Brayton along with Alan Mishchenko, Baruch Sterin, and Yen-Sheng Ho, is something of a paradox. Their background isn't what you would expect from contestants in a verification challenge: much of their experience has been in logic synthesis. I also found this group of gentlemen to be an extremely humble yet confident team that doesn't attribute its success to any one person. And their obvious intelligence comes with an unexpectedly quick-witted sense of humor.

2015 was the first year they participated in the deep bound track, and they didn't win. On only their second attempt, in 2017, they were met with resounding success. When I asked them what made the difference this year, I got the simple response, "We didn't really try last time." After a brief laugh shared by everyone, more detail was offered up.

True to their humility, they downplayed their secret to success, attributing it merely to a better implementation of communication channels within the engines, which provided real-time updates and allowed deeper, more robust reporting. Dig a little deeper though (no pun intended) and you'll discover that Alan had adopted a new SAT engine called Glucose. The original plan for this solver was to run multiple configurations of it, but in the end, that didn't prove to be very useful. Instead the team launched three different BMC engines: "&BMCs" (a new engine Alan developed and combined with Glucose), "BMC3", and ",BMC" (developed by Niklas Een). These engines worked alongside a PDR implementation that allowed early termination on unsatisfiable instances, freeing up the machines for other benchmarks. The parallel computing power of these engines is what made it possible to also run a bug simplification engine. All the engines were killed as soon as one of them returned an answer.
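The approach described above is a classic parallel "portfolio": several engines race on the same problem, and every engine is stopped as soon as one produces an answer. The sketch below illustrates that idea only; the engine names and delays are hypothetical stand-ins, and cooperative cancellation via an event substitutes for actually killing the engine processes as the ABC team does.

```python
# Minimal sketch of the portfolio-with-early-kill scheme described above.
# The "engines" here are placeholders, not ABC's BMC/PDR implementations.
import threading
import queue
import time

def engine(name, delay, results, stop):
    # Stand-in for a BMC or PDR engine: "solves" after `delay` seconds,
    # but periodically checks `stop` so it can be cancelled early.
    deadline = time.monotonic() + delay
    while time.monotonic() < deadline:
        if stop.is_set():
            return                      # another engine already answered
        time.sleep(0.01)
    results.put((name, "SAT"))

def run_portfolio(engines):
    results = queue.Queue()
    stop = threading.Event()
    threads = [threading.Thread(target=engine, args=(n, d, results, stop))
               for n, d in engines]
    for t in threads:
        t.start()
    name, answer = results.get()        # block until the first engine answers
    stop.set()                          # "kill" all remaining engines
    for t in threads:
        t.join()
    return name, answer

if __name__ == "__main__":
    # The fastest hypothetical engine wins; the others are cancelled.
    print(run_portfolio([("bmc_a", 2.0), ("bmc_b", 0.1), ("pdr", 1.0)]))
```

In a real solver the engines would be separate processes that are terminated outright, which is what makes the scheme pay off on unsatisfiable instances: the machine is freed for the next benchmark the moment any engine finishes.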

Their dedication to the project may be uncharacteristic as well. It turns out that Alan spent many hours building and modifying the code of the BMC engines, most of them while traveling to and from Kyoto. Alan often jokes that he likes to use travel as an excuse to focus on this type of work, and that without travel he would have a hard time getting it done.

In all seriousness, the team believes the recipe for success comes from the fact that they have worked together for years and have built a trusting chemistry between them. It also helps that they attack the problems from a different point of view than most of their competition. As mentioned earlier, these guys have their roots in logic synthesis, whereas most of their competitors come strictly from verification. This difference in perspective, better communication channels, and their use of extreme parallelism (running the PDR and BMC engines independently but simultaneously) were ultimately the secret sauce of their success.

As you can see from the results of the deep bound track contest shown below, ABC was the only team to find all 58 unsolved benchmarks carried over from the single track, and it reached them faster than any other team. Their solver was able to go deeper than any of the others.

I asked them about next year's competition and whether other teams would be able to replicate their formula, especially given that their solver is now publicly available. They jokingly answered, "Well, we planted a few bugs in there so the competition shouldn't get too comfortable."

If you or your company would like to donate benchmarks for future competitions, please contact Rob van Blommestein at Oski or Armin Biere at Johannes Kepler University.