Basic conceptual questions from a recruiter that confuse the hell out of me

OK, I have no industry experience, so I am basically a fresher, and I hope somebody can help me, because even with all the concepts I have learned I didn't know how to answer his questions.

Starting from the very beginning, he asked me to talk about a CPU verification project I did with SystemVerilog, so I told him what I built: generator, scoreboard, monitor and everything. He then asked: how did you come up with the tests and constraints to reach 100% coverage?
So I said they were based on the function of the unit under test; it's black-box testing, so the tests basically came from our own heads, and so on, and we could reach 100% coverage but that doesn't mean the design is bug-free.
He asked: is there a systematic way of doing that?
I was confused. OK, a systematic way, but our teacher only taught us to use constrained-random tests, assertions, and a coverage plan, and those all come straight from our own heads. I had already described coverage-driven constrained-random testing in my previous answer, so apparently he didn't consider that systematic. Maybe it wasn't enough? So I said more about how we came up with the covergroups and why. He still seemed unsatisfied and gave another example: Intel once designed a multiplier that had a bug in it, where certain inputs would generate the wrong result. Then he asked: how would you verify that a multiplier is a multiplier instead of an adder?
Even more confusing.
Since we were talking about functional verification, I said: build a scoreboard, build covergroups, and, depending on the input widths, test overflow and so on. We can check that carries propagate and generate random stimulus within the valid range. Then he said that would take 2^32 tests if each input is 16 bits wide. I said no, it wouldn't, since an exhaustive test isn't necessary, and he then asked how I would ensure 100% coverage.
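
To make that answer concrete, here is a minimal sketch of what I had in mind, with made-up class and signal names: constrained-random stimulus biased toward corner values, a scoreboard check against a reference model, and a covergroup that crosses operand corner bins instead of enumerating all 2^32 input pairs.

```systemverilog
// Minimal sketch of a 16x16 multiplier test (all names are made up).
class mul_txn;
  rand bit [15:0] a, b;
  // Bias stimulus toward corner values instead of pure uniform random.
  constraint corners_c {
    a dist { 16'h0000 := 5, 16'h0001 := 5, 16'h8000 := 5, 16'hFFFF := 5,
             [16'h0002:16'hFFFE] :/ 80 };
    b dist { 16'h0000 := 5, 16'h0001 := 5, 16'h8000 := 5, 16'hFFFF := 5,
             [16'h0002:16'hFFFE] :/ 80 };
  }
endclass

// Functional coverage: corner bins per operand plus their cross.
covergroup mul_cg with function sample(bit [15:0] a, bit [15:0] b);
  cp_a : coverpoint a {
    bins zero = {0}; bins one = {1}; bins msb = {16'h8000};
    bins max = {16'hFFFF}; bins mid = {[2:16'hFFFE]};
  }
  cp_b : coverpoint b {
    bins zero = {0}; bins one = {1}; bins msb = {16'h8000};
    bins max = {16'hFFFF}; bins mid = {[2:16'hFFFE]};
  }
  cp_axb : cross cp_a, cp_b;  // 25 bins, not 2^32 input pairs
endgroup

// Scoreboard check: compare the DUT result against a golden reference.
function void check_result(bit [15:0] a, bit [15:0] b, bit [31:0] dut_prod);
  bit [31:0] expected = 32'(a) * 32'(b);  // reference model
  if (dut_prod !== expected)
    $error("MUL mismatch: %0d * %0d expected %0d, DUT returned %0d",
           a, b, expected, dut_prod);
endfunction
```

With five bins per operand the cross has only 25 bins, so closing 100% of this functional coverage model needs nowhere near exhaustive simulation; the coverage model, not the input space, defines what 100% means.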

It felt like he wasn't listening when I said I wrote the coverage plan and the generator to do exactly that.

So I am here to ask everyone: what the hell does he mean by a systematic way? I am so frustrated by this question.
Help will be appreciated!

In reply to goodice:

This is a trick question.

There is no way to tell that you have tested 100% of what needs to be tested. Even a formal tool, which can exhaustively test an arithmetic operation, only tests what you set up assertions to check. The best you can hope for is a meticulous requirements specification.

I think what your recruiter was looking for was a comprehensive process: turn the requirements into a test plan, turn the test plan into a coverage plan, and then trace the coverage results back to the requirements. At each step you need to show how a test (or set of tests) verifies a requirement. You may want to look at the Coverage Cookbook for details on how to achieve coverage beyond just writing covergroups.
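
As one small, hypothetical illustration of that traceability (the requirement IDs and names below are invented), each coverage item can carry a comment naming the requirement it closes, so the coverage report reads back against the test plan:

```systemverilog
// Hypothetical example: each coverage item is annotated with the requirement
// from the test plan it was derived from, so results trace back to requirements.
covergroup alu_req_cg with function sample(logic [3:0] opcode, logic overflow);
  option.comment = "Test plan section 3.2: ALU arithmetic operations";
  cp_opcode : coverpoint opcode {
    option.comment = "REQ-ALU-001: every arithmetic opcode executed";
    bins add = {4'h0};
    bins sub = {4'h1};
    bins mul = {4'h2};
  }
  cp_ovf : coverpoint overflow {
    option.comment = "REQ-ALU-007: overflow observed both asserted and deasserted";
    bins seen     = {1};
    bins not_seen = {0};
  }
endgroup
```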

The Intel FDIV bug is a good example of how the verification team missed the opportunity to set up coverage based on the implementation to find "interesting" coverpoints.
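
To make "coverage based on the implementation" concrete, here is a hypothetical white-box sketch for a multiplier; dut.carry_chain and dut.pp_nonzero are invented names standing in for internal signals that would come from the actual RTL. (In the FDIV case, the bug was a handful of missing entries in the divider's lookup table, exactly the kind of internal detail a black-box operand cross is unlikely to hit.)

```systemverilog
// Hypothetical white-box coverage keyed to the multiplier's structure.
// dut.carry_chain and dut.pp_nonzero are made-up names for internal signals.
covergroup mul_impl_cg @(posedge clk);
  // Exercise the carry chain, including the worst-case full ripple.
  cp_carry : coverpoint dut.carry_chain {
    bins no_carry    = {'0};
    bins full_ripple = {'1};
    wildcard bins carry_to_msb = {16'b1???????????????};
  }
  // Which partial-product rows actually contributed to a result.
  cp_pp : coverpoint dut.pp_nonzero;
endgroup
```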

In reply to dave_59:

Thank you very much for replying and for identifying the Intel bug the recruiter was referring to. It helps a lot. Since Synopsys is mainly a company that makes tools, as the recruiter kept emphasizing, maybe he also wanted me to say something about using scripts to generate coverage bins.

I have never done that before, and it is something to keep in mind for next time.
Again, thank you very much for helping!