a. Is it a good practice to create the model within the scoreboard itself?
b. Or can I create a predictor model separately and have it output its results to the scoreboard, so that the scoreboard gets results from the predictor model as well as from the DUT, and only the checking is done inside the scoreboard (roughly the split sketched below)?
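For concreteness, the (b) split would look roughly like this in UVM terms: a separate predictor computes expected items from the stimulus stream, and a comparison-only scoreboard checks them against the DUT output. This is only a minimal sketch; `my_txn`, the `data` field, and the `+1` transform are placeholders, not anything from a real design.

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// Placeholder transaction
class my_txn extends uvm_sequence_item;
  rand int unsigned data;
  `uvm_object_utils(my_txn)
  function new(string name = "my_txn"); super.new(name); endfunction
endclass

// Predictor: subscribes to the input monitor, publishes expected results
class my_predictor extends uvm_subscriber #(my_txn);
  `uvm_component_utils(my_predictor)
  uvm_analysis_port #(my_txn) expected_ap;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    expected_ap = new("expected_ap", this);
  endfunction

  // The reference model lives here, not in the scoreboard
  virtual function void write(my_txn t);
    my_txn exp = my_txn::type_id::create("exp");
    exp.data = t.data + 1;  // placeholder transform
    expected_ap.write(exp);
  endfunction
endclass

// Scoreboard: a pure in-order comparator of expected vs. actual
`uvm_analysis_imp_decl(_exp)
`uvm_analysis_imp_decl(_act)

class my_scoreboard extends uvm_scoreboard;
  `uvm_component_utils(my_scoreboard)
  uvm_analysis_imp_exp #(my_txn, my_scoreboard) exp_imp;  // from predictor
  uvm_analysis_imp_act #(my_txn, my_scoreboard) act_imp;  // from output monitor
  my_txn exp_q[$];

  function new(string name, uvm_component parent);
    super.new(name, parent);
    exp_imp = new("exp_imp", this);
    act_imp = new("act_imp", this);
  endfunction

  function void write_exp(my_txn t);
    exp_q.push_back(t);
  endfunction

  function void write_act(my_txn t);
    my_txn exp;
    if (exp_q.size() == 0) begin
      `uvm_error("SB", "Actual item arrived with no expected item pending")
      return;
    end
    exp = exp_q.pop_front();
    if (t.data !== exp.data)
      `uvm_error("SB", $sformatf("Mismatch: expected %0d, got %0d", exp.data, t.data))
  endfunction
endclass
```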
Either might be right; it depends on the DUT. I don't believe there is a single best practice for this scoreboard design choice. In my experience with real-world designs, it is rare to be able to separate the predictor from the scoreboard comparison as in (b); that only seems to work for textbook examples. Most designs have complex temporal interactions and/or aggregation of data items from different sources, and the comparison needs to be done in the middle of that with a customized data structure, as in (a). Assertions can help: low-level ones save replicating cause and effect as procedural code, and high-level ones apply rules to abstracted events above a layer of procedural model code.
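As a rough illustration of the (a) style, the sketch below keeps the reference model and the comparison inside one scoreboard, with a customized data structure (here an associative array keyed by a transaction id, to allow out-of-order matching) sitting between them. Everything here is made up for illustration: `pkt_txn`, the `id` and `payload` fields, and the XOR transform are placeholders, not a recommended architecture.

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// Placeholder transaction
class pkt_txn extends uvm_sequence_item;
  rand int unsigned id;
  rand int unsigned payload;
  `uvm_object_utils(pkt_txn)
  function new(string name = "pkt_txn"); super.new(name); endfunction
endclass

`uvm_analysis_imp_decl(_in)
`uvm_analysis_imp_decl(_out)

class pkt_scoreboard extends uvm_scoreboard;
  `uvm_component_utils(pkt_scoreboard)
  uvm_analysis_imp_in  #(pkt_txn, pkt_scoreboard) in_imp;   // from input monitor
  uvm_analysis_imp_out #(pkt_txn, pkt_scoreboard) out_imp;  // from output monitor
  int unsigned expected[int unsigned];  // expected payload, keyed by id

  function new(string name, uvm_component parent);
    super.new(name, parent);
    in_imp  = new("in_imp", this);
    out_imp = new("out_imp", this);
  endfunction

  // Model: predict the expected output as stimulus is observed
  function void write_in(pkt_txn t);
    expected[t.id] = t.payload ^ 32'hDEAD_BEEF;  // placeholder transform
  endfunction

  // Check: compare out-of-order by id as DUT output arrives
  function void write_out(pkt_txn t);
    if (!expected.exists(t.id))
      `uvm_error("SB", $sformatf("Unexpected output id %0d", t.id))
    else begin
      if (t.payload !== expected[t.id])
        `uvm_error("SB", $sformatf("id %0d: expected %0h, got %0h",
                                   t.id, expected[t.id], t.payload))
      expected.delete(t.id);
    end
  endfunction

  // End-of-test check: nothing predicted should be left unmatched
  function void check_phase(uvm_phase phase);
    if (expected.num() != 0)
      `uvm_error("SB", $sformatf("%0d expected items were never seen", expected.num()))
  endfunction
endclass
```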
Read more about various scoreboard design considerations in my DVCon 2012 paper on Scoreboard Architecture.