In reply to Farhad:
Each verification technology has its strengths and weaknesses. When talking about formal you will hear the term "state space"; its size is driven primarily by the number of state bits in the design, so for n state bits there are up to 2^n states, which is obviously a huge number. Briefly, think of simulation as a depth-first methodology where you carve very targeted/specific paths through that state space, ideally tied to a testplan. The downside is that it is impossible to cover every scenario, so corner-case bugs can slip through. Simulation of course is vector based, requiring a testbench of some sort (directed, UVM, etc.; see the sketch below).
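For example, a directed test is just hand-written stimulus driving one specific path. A minimal sketch, assuming a hypothetical FIFO DUT (my_fifo and all signal names are placeholders, not from any real design):

```systemverilog
`timescale 1ns/1ps

// Minimal directed-test sketch (hypothetical DUT and signal names).
// Each such test exercises exactly one path through the state space.
module tb;
  logic clk = 0, rst_n, wr_en, rd_en;
  logic [7:0] wdata;

  always #5 clk = ~clk;  // free-running clock

  my_fifo dut (.*);      // placeholder design under test

  initial begin
    rst_n = 0; wr_en = 0; rd_en = 0;
    repeat (2) @(posedge clk);
    rst_n = 1;
    // Targeted scenario: write one entry, then read it back
    @(posedge clk); wr_en <= 1; wdata <= 8'hA5;
    @(posedge clk); wr_en <= 0; rd_en <= 1;
    @(posedge clk); rd_en <= 0;
    $finish;
  end
endmodule
```

Covering every interesting scenario this way quickly becomes impractical, which is exactly where the corner-case gap comes from.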
Think of formal as breadth-first verification. It uses mathematical methods (no vectors), so every cycle of analysis covers all input and next-state possibilities. You do need to write a property to make use of formal techniques (a minimal sketch follows below). The great thing about this is that it can find the corner-case bugs you might miss in simulation. The downside is that the analysis is exponential in nature, so the algorithms either bog down or potentially run out of memory on the machine you are running on (not as common on most of today's servers).
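To make that concrete, here is a minimal SVA sketch of the kind of property a formal tool can prove exhaustively. The req/gnt handshake, signal names, and the 4-cycle bound are all assumptions for illustration, not from any specific design:

```systemverilog
// Hypothetical handshake property: every request must be granted
// within 1 to 4 cycles. A formal tool attempts to prove this holds
// for ALL reachable states and ALL input sequences, not just the
// vectors a testbench happens to drive.
module handshake_props (input logic clk, rst_n, req, gnt);
  property p_req_gets_gnt;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] gnt;
  endproperty

  a_req_gets_gnt: assert property (p_req_gets_gnt);
endmodule
```

You would typically bind a module like this to the block under test; the tool then either proves the property or hands you a counterexample trace showing the corner case.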
Ideally you use both where each is strong: chip/SoC-level verification generally requires simulation/emulation, while at block level you can often leverage formal.
There are a number of excellent classes on Verification Academy. Maybe start with the "Getting started with formal verification" course.