{
  "title": "Voting Ensemble Mastery: 100 MCQs",
  "description": "A complete MCQ set on Voting Ensemble Methods — covering hard and soft voting, use-cases, advantages, limitations, and real-world scenario questions.",
  "questions": [
    {
      "id": 1,
      "questionText": "What is the core idea of a Voting Ensemble?",
      "options": [
        "Train a single strong model",
        "Perform dimensionality reduction",
        "Reduce dataset size",
        "Combine predictions from multiple models"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Voting Ensembling combines predictions from multiple models to improve overall accuracy."
    },
    {
      "id": 2,
      "questionText": "What are the two main types of Voting?",
      "options": [
        "Bagging and Boosting",
        "Static and Dynamic Voting",
        "Linear and Non-linear Voting",
        "Hard and Soft Voting"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Voting Ensembles are primarily divided into Hard Voting and Soft Voting methods."
    },
    {
      "id": 3,
      "questionText": "What does Hard Voting use to make the final prediction?",
      "options": [
        "Highest loss",
        "Majority class vote",
        "Average probabilities",
        "Gradient values"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Hard Voting chooses the class that appears most frequently among model predictions."
    },
    {
      "id": 4,
      "questionText": "Soft Voting makes predictions based on:",
      "options": [
        "Averaging class probabilities",
        "Majority class votes",
        "Random selection",
        "Model with highest accuracy only"
      ],
      "correctAnswerIndex": 0,
      "explanation": "Soft Voting averages probabilities from all models and selects the class with the highest probability."
    },
    {
      "id": 5,
      "questionText": "Soft Voting requires that base models must:",
      "options": [
        "Have the same accuracy",
        "Output raw labels",
        "Output probability scores",
        "Be decision trees only"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Soft Voting needs probability outputs like `predict_proba()` — not just class labels."
    },
    {
      "id": 6,
      "questionText": "What is the minimum number of models required for a Voting Ensemble?",
      "options": [
        "3",
        "1",
        "2",
        "No minimum"
      ],
      "correctAnswerIndex": 2,
      "explanation": "At least 2 models are required to perform any kind of voting."
    },
    {
      "id": 7,
      "questionText": "What is the purpose of using multiple models in Voting?",
      "options": [
        "To combine strengths of different models",
        "To reduce dataset size",
        "To increase bias",
        "To make training faster"
      ],
      "correctAnswerIndex": 0,
      "explanation": "Voting combines multiple models to leverage their strengths and improve prediction reliability."
    },
    {
      "id": 8,
      "questionText": "In Hard Voting, what happens if there is a tie between class predictions?",
      "options": [
        "First class is selected",
        "Depends on implementation",
        "Model with highest accuracy is selected",
        "Random class is selected"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Tie handling is implementation-dependent and varies across libraries."
    },
    {
      "id": 9,
      "questionText": "Which Voting method performs better when base models are calibrated and output probabilities?",
      "options": [
        "Soft Voting",
        "Hard Voting",
        "Rule-based Voting",
        "Random Voting"
      ],
      "correctAnswerIndex": 0,
      "explanation": "Soft Voting uses probability averaging — works best with well-calibrated models."
    },
    {
      "id": 10,
      "questionText": "Which of the following is a key advantage of Voting over a single model?",
      "options": [
        "Requires less computation",
        "Better generalization",
        "Always 100% accuracy",
        "No need for tuning"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Voting reduces the chance of overfitting and improves generalization performance."
    },
    {
      "id": 11,
      "questionText": "Which type of Voting is preferred when class probabilities are reliable?",
      "options": [
        "Bootstrap Voting",
        "Soft Voting",
        "Random Voting",
        "Hard Voting"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Soft Voting utilizes probability outputs effectively when they are calibrated."
    },
    {
      "id": 12,
      "questionText": "What is a requirement for models in Soft Voting?",
      "options": [
        "All models must be trees",
        "All models must be neural networks",
        "All models must use same hyperparameters",
        "All models must output probability scores"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Soft Voting needs probability scores using functions like `predict_proba()`."
    },
    {
      "id": 13,
      "questionText": "Which is true about Voting Ensembles?",
      "options": [
        "They are only used for regression",
        "They must use only identical models",
        "They reduce overfitting by aggregating independent models",
        "They eliminate the need for feature engineering"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Voting reduces overfitting by combining diverse independent models."
    },
    {
      "id": 14,
      "questionText": "What type of models can be used inside a Voting Ensemble?",
      "options": [
        "Only decision trees",
        "Only SVM",
        "Any mix of models (heterogeneous)",
        "Only neural networks"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Voting can combine diverse models like SVM, Logistic Regression, Decision Trees, etc."
    },
    {
      "id": 15,
      "questionText": "Which problem type is Voting Ensemble typically used for?",
      "options": [
        "Only regression",
        "Only classification",
        "Both classification and regression",
        "Only clustering"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Voting is mainly used for classification, but can also be extended to regression."
    },
    {
      "id": 16,
      "questionText": "In Hard Voting, how is the final class decided?",
      "options": [
        "By averaging probabilities",
        "By selecting random model output",
        "By selecting highest confidence model",
        "By selecting majority voted class labels"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Hard Voting simply picks the class label that gets majority votes."
    },
    {
      "id": 17,
      "questionText": "Which of the following is a limitation of Voting?",
      "options": [
        "Always requires GPUs",
        "Not interpretable easily",
        "Can be used only with CNNs",
        "Cannot handle classification tasks"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Since multiple models are used, analyzing why a prediction was made becomes harder."
    },
    {
      "id": 18,
      "questionText": "Which is true for Hard Voting?",
      "options": [
        "Requires all models to be identical",
        "Needs only class labels",
        "Slower than Soft Voting",
        "Uses probabilities"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Hard Voting only needs class labels like 0/1, not probabilities."
    },
    {
      "id": 19,
      "questionText": "Is it possible to combine Logistic Regression, SVM, and Random Forest in a Voting Ensemble?",
      "options": [
        "Only if dataset is small",
        "Yes, heterogeneous models are allowed",
        "Only if all are deep learning models",
        "No, all models must be same type"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Voting allows combining different model families for better performance."
    },
    {
      "id": 20,
      "questionText": "What does Soft Voting average?",
      "options": [
        "Model parameters",
        "Raw model inputs",
        "Dataset rows",
        "Predicted class probabilities"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Soft Voting averages probability outputs, then selects highest probability class."
    },
    {
      "id": 21,
      "questionText": "Which library provides VotingClassifier in Python?",
      "options": [
        "NumPy",
        "PyTorch",
        "scikit-learn",
        "TensorFlow"
      ],
      "correctAnswerIndex": 2,
      "explanation": "scikit-learn provides VotingClassifier for ensembling models."
    },
    {
      "id": 22,
      "questionText": "Which voting type is more robust against noisy class probability estimations?",
      "options": [
        "Hard Voting",
        "Soft Voting",
        "None",
        "Random Voting"
      ],
      "correctAnswerIndex": 0,
      "explanation": "Hard Voting is safer when probabilities are unreliable or poorly calibrated."
    },
    {
      "id": 23,
      "questionText": "Can Voting Ensembles improve stability of model predictions?",
      "options": [
        "Only for time series",
        "Yes, by reducing variance",
        "Only in regression",
        "No, increases randomness"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Voting helps reduce variance by averaging multiple predictions."
    },
    {
      "id": 24,
      "questionText": "If models strongly disagree in Hard Voting, what happens?",
      "options": [
        "Soft Voting is automatically used",
        "Prediction becomes unstable",
        "Voting skips such cases",
        "It stops training"
      ],
      "correctAnswerIndex": 1,
      "explanation": "High disagreement can reduce prediction confidence and stability."
    },
    {
      "id": 25,
      "questionText": "What happens if one weak model is added to Soft Voting?",
      "options": [
        "No effect at all",
        "Causes overfitting immediately",
        "Always improves accuracy",
        "Can reduce overall performance"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Soft Voting averages probabilities — a weak noisy model can hurt accuracy."
    },
    {
      "id": 26,
      "questionText": "What is the main difference between Bagging and Voting?",
      "options": [
        "Bagging is only for regression",
        "Voting always boosts performance, Bagging does not",
        "Voting uses different model types, Bagging uses same model type",
        "Voting needs large datasets only"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Voting is heterogeneous by nature; Bagging generally uses the same model type with data bootstrapping."
    },
    {
      "id": 27,
      "questionText": "Which statement is true about Hard Voting?",
      "options": [
        "It uses average probability",
        "It trains models sequentially",
        "It requires all models to be deep learning models",
        "It selects the majority class label"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Hard voting picks the class label that appears most frequently among model predictions."
    },
    {
      "id": 28,
      "questionText": "Soft Voting is more reliable than Hard Voting when:",
      "options": [
        "The dataset is extremely small",
        "Model probabilities are well-calibrated",
        "Using only one model",
        "Class labels are noisy"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Soft voting performs best when model probability outputs are accurate."
    },
    {
      "id": 29,
      "questionText": "Which of the following is a key advantage of Soft Voting?",
      "options": [
        "Avoids probability calculations entirely",
        "Can weigh models differently using probabilities",
        "Only works with random forests",
        "Does not need probability estimates"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Soft voting can assign more importance to stronger models using weighted averaging."
    },
    {
      "id": 30,
      "questionText": "Voting Ensemble works best when base models are:",
      "options": [
        "From the same algorithm",
        "Poorly trained",
        "Highly correlated",
        "Diverse and independent"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Model diversity ensures different error patterns, improving ensemble performance."
    },
    {
      "id": 31,
      "questionText": "In Voting Ensemble, combining Logistic Regression, SVM, and Decision Tree is an example of:",
      "options": [
        "Bagging",
        "Sequential ensemble",
        "Heterogeneous ensemble",
        "Homogeneous ensemble"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Using different types of models is called a heterogeneous ensemble."
    },
    {
      "id": 32,
      "questionText": "Which type of Voting allows assigning more importance to better-performing models?",
      "options": [
        "Uniform Voting",
        "Random Voting",
        "Hard Voting",
        "Soft Voting with weights"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Soft voting supports weighting individual models based on performance."
    },
    {
      "id": 33,
      "questionText": "What happens if one model in a Voting Ensemble consistently gives wrong predictions?",
      "options": [
        "It fully controls the final output",
        "It stops the ensemble from working",
        "It improves accuracy",
        "It slightly decreases overall accuracy"
      ],
      "correctAnswerIndex": 3,
      "explanation": "A weak model can slightly hurt performance but often the ensemble still performs well."
    },
    {
      "id": 34,
      "questionText": "Which Voting method is more interpretable regarding final decision logic?",
      "options": [
        "Both equally",
        "None",
        "Hard Voting",
        "Soft Voting"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Hard voting decisions can be directly traced to majority class votes."
    },
    {
      "id": 35,
      "questionText": "Soft Voting may underperform if:",
      "options": [
        "Models are shallow",
        "All models agree",
        "Models don't output probabilities",
        "Dataset is small"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Soft voting requires probability outputs like predict_proba()."
    },
    {
      "id": 36,
      "questionText": "Which of these is a REAL requirement for Soft Voting?",
      "options": [
        "All models must be trees",
        "All models must be neural networks",
        "All models must output class probabilities",
        "All models must have same accuracy"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Soft voting requires probability estimates — no need for same accuracy or model type."
    },
    {
      "id": 37,
      "questionText": "Voting Ensembles improve performance mainly by reducing:",
      "options": [
        "Bias",
        "Training time",
        "Variance",
        "Dataset size"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Voting helps reduce the variance of predictions by averaging model outputs."
    },
    {
      "id": 38,
      "questionText": "Which is a potential DISADVANTAGE of Voting Ensembles?",
      "options": [
        "Cannot be used for classification",
        "Hard to interpret final decisions",
        "Must use same model type",
        "Only works for regression"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Since multiple models influence the result, interpretability decreases."
    },
    {
      "id": 39,
      "questionText": "Voting Ensemble is MOST helpful when individual models:",
      "options": [
        "Have identical predictions",
        "Use the same algorithm and parameters",
        "Are all overfitted",
        "Make complementary errors"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Ensemble works best when models compensate for each other's errors."
    },
    {
      "id": 40,
      "questionText": "Which scenario best fits using Voting Ensemble?",
      "options": [
        "Multiple trained models with decent accuracy",
        "Highly imbalanced dataset with no labels",
        "Real-time system with tight latency constraint",
        "Only one extremely accurate model"
      ],
      "correctAnswerIndex": 0,
      "explanation": "Voting is useful when several decent but imperfect models are available."
    },
    {
      "id": 41,
      "questionText": "Which of the following is TRUE about weighting in Soft Voting?",
      "options": [
        "Weights decrease ensemble accuracy always",
        "Weights are randomly assigned",
        "Weights are only used in Hard Voting",
        "Weights allow stronger models to influence more"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Weights in Soft Voting help give preference to stronger models."
    },
    {
      "id": 42,
      "questionText": "Hard Voting is most effective when:",
      "options": [
        "Models produce random outputs",
        "Dataset is unsupervised",
        "Class labels from models are stable",
        "Probabilities are reliable"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Hard voting is useful when class labels are confidently predicted."
    },
    {
      "id": 43,
      "questionText": "Which approach improves Soft Voting performance?",
      "options": [
        "Avoid probability averaging",
        "Use uncalibrated probability models",
        "Use calibrated probability models",
        "Remove probability outputs"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Soft voting needs properly calibrated probability outputs for reliability."
    },
    {
      "id": 44,
      "questionText": "In voting, which models are preferred for maximum accuracy gain?",
      "options": [
        "Strong but diverse models",
        "Extremely similar models",
        "Very weak models only",
        "Highly correlated models"
      ],
      "correctAnswerIndex": 0,
      "explanation": "Diversity ensures different error patterns, maximizing ensemble accuracy."
    },
    {
      "id": 45,
      "questionText": "What is a potential RISK of including too many models in a Voting Ensemble?",
      "options": [
        "Loss of supervised learning",
        "Higher computation and latency",
        "Automatic model deletion",
        "Overfitting on test data"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Too many models increase compute cost and response time."
    },
    {
      "id": 46,
      "questionText": "Which situation can DEGRADE Voting Ensemble performance?",
      "options": [
        "Using only calibrated probability models",
        "Adding several almost identical models",
        "Combining different feature extractors",
        "Adding multiple weak uncorrelated learners"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Redundant identical models bring no diversity and give no benefit."
    },
    {
      "id": 47,
      "questionText": "Soft Voting is preferred over Hard Voting when:",
      "options": [
        "Only labels are needed",
        "No model supports probability output",
        "Probability outputs are reliable",
        "Models are identical"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Soft voting requires accurate probability outputs for better performance."
    },
    {
      "id": 48,
      "questionText": "If accuracy of individual models is low but different mistakes are made, Voting can still:",
      "options": [
        "Stop working",
        "Always fail",
        "Outperform individual models",
        "Perform worse than all models"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Ensemble effect combines different strengths, even if individual accuracy is modest."
    },
    {
      "id": 49,
      "questionText": "Which type of dataset benefits MOST from Voting Ensemble?",
      "options": [
        "Large and diverse structured data",
        "Purely unstructured images only",
        "Datasets with no labels",
        "Dataset with only one feature"
      ],
      "correctAnswerIndex": 0,
      "explanation": "Voting is very effective in structured tabular data problems."
    },
    {
      "id": 50,
      "questionText": "Which metric is NOT directly improved by Voting Ensemble?",
      "options": [
        "Robustness",
        "Stability",
        "Training speed",
        "Generalization"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Voting increases computation; it does not accelerate training."
    },
    {
      "id": 51,
      "questionText": "Medium-Level: Soft Voting gives better performance over Hard Voting when:",
      "options": [
        "Model diversity does not exist",
        "Ensemble contains a single model",
        "Model probabilities are well calibrated",
        "Models only provide class labels"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Soft voting leverages probability information, so accurate calibrated probabilities improve performance."
    },
    {
      "id": 52,
      "questionText": "Which strategy improves Voting Ensemble performance?",
      "options": [
        "Using only identical models",
        "Skipping data preprocessing",
        "Blending diverse model architectures",
        "Ignoring validation scores"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Diversity among models boosts ensemble power significantly."
    },
    {
      "id": 53,
      "questionText": "Voting Ensemble can fail if base models are:",
      "options": [
        "Moderately accurate",
        "Trained on different features",
        "Diverse and independent",
        "Highly correlated with similar errors"
      ],
      "correctAnswerIndex": 3,
      "explanation": "If base models make similar errors, voting does not reduce errors effectively."
    },
    {
      "id": 54,
      "questionText": "Soft Voting with weights allows:",
      "options": [
        "Random selection of predictions",
        "Ignore weaker models completely",
        "Greater influence for stronger models",
        "Equal influence for all models"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Weights in Soft Voting give more importance to better-performing models."
    },
    {
      "id": 55,
      "questionText": "Hard Voting is more robust when:",
      "options": [
        "The dataset is very large",
        "Individual model predictions are noisy",
        "There is only one base model",
        "Model probabilities are perfect"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Hard Voting reduces sensitivity to probability estimation errors by using majority votes."
    },
    {
      "id": 56,
      "questionText": "Which scenario is best suited for a Voting Ensemble?",
      "options": [
        "Several moderately performing models with different strengths exist",
        "All models are identical",
        "Only one high-performing model is available",
        "Dataset has no labels"
      ],
      "correctAnswerIndex": 0,
      "explanation": "Voting leverages complementary predictions from multiple models to improve accuracy."
    },
    {
      "id": 57,
      "questionText": "When combining models in Voting, diversity helps to:",
      "options": [
        "Decrease overall variance",
        "Reduce dataset size",
        "Increase bias",
        "Accelerate training"
      ],
      "correctAnswerIndex": 0,
      "explanation": "Diverse models make different errors, which are averaged out, reducing variance."
    },
    {
      "id": 58,
      "questionText": "Adding very weak models to a Voting Ensemble can:",
      "options": [
        "Have no effect",
        "Always improve accuracy",
        "Slightly reduce overall performance",
        "Break the ensemble"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Weak models may introduce noise and slightly lower ensemble accuracy."
    },
    {
      "id": 59,
      "questionText": "In a Voting Ensemble, tie-breaking in Hard Voting depends on:",
      "options": [
        "Number of features",
        "Probability outputs",
        "Implementation details",
        "Dataset size"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Hard Voting tie-handling is usually implementation-specific."
    },
    {
      "id": 60,
      "questionText": "Voting Ensemble helps improve:",
      "options": [
        "Underfitting only",
        "Training speed",
        "Generalization and robustness",
        "Overfitting only"
      ],
      "correctAnswerIndex": 2,
      "explanation": "By combining multiple models, ensembles improve generalization and reduce sensitivity to noise."
    },
    {
      "id": 61,
      "questionText": "Which base models can be combined in a Voting Ensemble?",
      "options": [
        "Any heterogeneous or homogeneous models",
        "Only neural networks",
        "Only decision trees",
        "Only linear models"
      ],
      "correctAnswerIndex": 0,
      "explanation": "Voting allows combining different model types for better ensemble performance."
    },
    {
      "id": 62,
      "questionText": "Medium Level: If one base model in Soft Voting produces biased probabilities, the ensemble will:",
      "options": [
        "Ignore the model automatically",
        "Switch to Hard Voting",
        "Always fail",
        "Average out if others are unbiased"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Soft Voting averages probabilities, so other unbiased models can compensate for one biased model."
    },
    {
      "id": 63,
      "questionText": "Scenario: You have three classifiers, two strong and one weak. Soft Voting is used. Which is true?",
      "options": [
        "Soft Voting ignores weak models",
        "Strong models have greater influence if weighted",
        "All models have equal effect regardless of performance",
        "Weak model dominates ensemble"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Weighted Soft Voting allows strong models to have higher influence on the final prediction."
    },
    {
      "id": 64,
      "questionText": "Scenario: A Voting Ensemble has low diversity. Expected outcome?",
      "options": [
        "Significant increase in accuracy",
        "Low improvement over individual models",
        "High variance reduction",
        "Ensemble stops working"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Without diversity, models make similar errors and the ensemble gains little benefit."
    },
    {
      "id": 65,
      "questionText": "Medium Level: How to handle class imbalance in Voting Ensemble?",
      "options": [
        "Ignore imbalance",
        "Remove minority class",
        "Use class weighting or balanced sampling in base models",
        "Only use Hard Voting"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Adjusting base models to handle class imbalance ensures the ensemble is not biased."
    },
    {
      "id": 66,
      "questionText": "Scenario: Soft Voting probabilities differ in scale among models. Recommended step?",
      "options": [
        "Remove some models",
        "Use only Hard Voting",
        "Normalize probabilities before averaging",
        "Ignore scaling differences"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Scaling ensures probabilities are comparable before combining in Soft Voting."
    },
    {
      "id": 67,
      "questionText": "Scenario: Voting Ensemble shows unstable predictions on edge cases. Likely reason?",
      "options": [
        "Using Hard Voting",
        "Training dataset too large",
        "Too many base models",
        "Insufficient diversity in base models"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Lack of diversity leads to correlated errors, causing instability."
    },
    {
      "id": 68,
      "questionText": "Scenario: You want a lightweight Voting model for real-time use. Best practice?",
      "options": [
        "Ignore computation constraints",
        "Add more complex models",
        "Reduce number of base models and simplify them",
        "Use Soft Voting only"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Fewer and simpler models reduce latency while maintaining reasonable ensemble performance."
    },
    {
      "id": 69,
      "questionText": "Scenario: Hard Voting ensemble has many ties. Recommended action?",
      "options": [
        "Remove base models",
        "Switch to Soft Voting or assign weights",
        "Keep Hard Voting as is",
        "Randomly select predictions"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Soft Voting or weighting reduces the effect of ties and improves reliability."
    },
    {
      "id": 70,
      "questionText": "Scenario: You combine Logistic Regression, Random Forest, and SVM using Soft Voting. Test accuracy is lower than best base model. Possible causes?",
      "options": [
        "Soft Voting always underperforms",
        "Random initialization of models",
        "Improper probability calibration or correlated errors",
        "Training data too large"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Soft Voting performance depends on probability calibration and diversity among base models."
    },
    {
      "id": 71,
      "questionText": "Hard Voting vs Soft Voting: Which is better for well-calibrated probabilistic outputs?",
      "options": [
        "Neither",
        "Both equal",
        "Soft Voting",
        "Hard Voting"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Soft Voting leverages probability outputs and generally performs better with well-calibrated models."
    },
    {
      "id": 72,
      "questionText": "Scenario: One base model in a Voting Ensemble fails completely on new data. Ensemble effect?",
      "options": [
        "Hard Voting stops working",
        "Soft Voting averages can reduce impact",
        "Ensemble accuracy drops to zero",
        "All models ignored"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Soft Voting can mitigate the effect of one failing model by averaging with other models' outputs."
    },
    {
      "id": 73,
      "questionText": "Scenario: Ensemble predictions fluctuate across runs. Likely cause?",
      "options": [
        "Multiple base models",
        "Random initialization of base models",
        "High dataset diversity",
        "Using Soft Voting"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Randomness in training some base models can cause prediction fluctuations."
    },
    {
      "id": 74,
      "questionText": "Scenario: Weighted Soft Voting used incorrectly. What could happen?",
      "options": [
        "Hard Voting automatically applies",
        "Strong models underrepresented, ensemble underperforms",
        "Ensemble accuracy always increases",
        "Base models ignored"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Incorrect weighting can reduce the influence of stronger models, decreasing ensemble performance."
    },
    {
      "id": 75,
      "questionText": "Scenario: Using Soft Voting with outputs on different scales. What to do?",
      "options": [
        "Randomly select one model",
        "Use Hard Voting",
        "Normalize probabilities",
        "Ignore scaling"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Normalizing probabilities ensures a fair contribution from all models."
    },
    {
      "id": 76,
      "questionText": "Scenario: Ensemble overfits on training data. Recommended solution?",
      "options": [
        "Ignore ensemble and use single model",
        "Use cross-validation and consider reducing base models or regularizing them",
        "Switch to Hard Voting only",
        "Add more weak models"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Proper cross-validation and controlling base model complexity help prevent overfitting."
    },
    {
      "id": 77,
      "questionText": "Scenario: Voting Ensemble performs worse than individual models. Likely reason?",
      "options": [
        "Voting always underperforms",
        "Dataset too large",
        "Using Soft Voting automatically fails",
        "Base models highly correlated or weak"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Ensemble benefits only arise when base models are diverse and reasonably accurate."
    },
    {
      "id": 78,
      "questionText": "Scenario: Ensemble used for imbalanced classification. Which strategy helps?",
      "options": [
        "Ignore class imbalance",
        "Class weighting or balanced sampling in base models",
        "Remove minority classes",
        "Use Hard Voting only"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Adjusting base models for imbalance ensures ensemble predictions are not biased."
    },
    {
      "id": 79,
      "questionText": "Scenario: Adding highly similar base models. Ensemble outcome?",
      "options": [
        "Maximum accuracy gain",
        "Soft Voting fails",
        "Little to no improvement",
        "Hard Voting fails"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Similar models add redundancy, giving minimal benefit to the ensemble."
    },
    {
      "id": 80,
      "questionText": "Scenario: Ensemble shows different accuracy across runs. Most likely reason?",
      "options": [
        "Hard Voting always fluctuates",
        "Soft Voting inherently unstable",
        "Randomness in training base models",
        "Dataset size too small"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Random initialization or stochastic training of base models causes variance across runs."
    },
    {
      "id": 81,
      "questionText": "Scenario: You need an ensemble for high-risk decisions where errors are costly. Best approach?",
      "options": [
        "Random Voting",
        "Hard Voting with weak models",
        "Weighted Soft Voting with strong base models",
        "Single uncalibrated model"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Weighted Soft Voting emphasizes reliable models and reduces error in critical decisions."
    },
    {
      "id": 82,
      "questionText": "Scenario: You combine models trained on overlapping features. Risk?",
      "options": [
        "Soft Voting ignored",
        "Ensemble fails to produce output",
        "Highly correlated errors, reduced ensemble benefit",
        "Maximum ensemble improvement"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Correlated errors reduce the variance reduction benefit of the ensemble."
    },
    {
      "id": 83,
      "questionText": "Scenario: One model outputs extreme probabilities. Soft Voting effect?",
      "options": [
        "Hard Voting preferred",
        "May skew average unless weights or normalization applied",
        "Has no effect",
        "Automatically corrected"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Extreme probabilities can dominate averaging; normalization or weights correct this."
    },
    {
      "id": 84,
      "questionText": "Scenario: Voting Ensemble uses both linear and nonlinear models. Expected benefit?",
      "options": [
        "Soft Voting ignored",
        "No benefit, models cancel each other",
        "Capture complex patterns better than single model type",
        "Ensemble fails"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Heterogeneous ensembles leverage strengths of diverse models."
    },
    {
      "id": 85,
      "questionText": "Scenario: You observe overfitting in ensemble predictions. Recommended step?",
      "options": [
        "Regularize base models and limit number of learners",
        "Switch from Soft to Hard Voting",
        "Ignore and keep current setup",
        "Add more weak base models"
      ],
      "correctAnswerIndex": 0,
      "explanation": "Controlling base model complexity reduces overfitting risk in the ensemble."
    },
    {
      "id": 86,
      "questionText": "Scenario: Voting Ensemble applied on noisy data. Best choice?",
      "options": [
        "Use only one model",
        "Hard Voting may be more robust to noisy predictions",
        "Ignore noise",
        "Soft Voting always better"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Hard Voting reduces the effect of extreme probability fluctuations caused by noise."
    },
    {
      "id": 87,
      "questionText": "Scenario: You need explainability for the ensemble decision. Best choice?",
      "options": [
        "Single black-box model",
        "Hard Voting with traceable majority votes",
        "Soft Voting with unweighted averaging",
        "Use random models"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Hard Voting provides clear insight into which class won the majority vote."
    },
    {
      "id": 88,
      "questionText": "Scenario: You want to minimize latency in ensemble inference. Recommended?",
      "options": [
        "Randomize predictions",
        "Increase base models and use Soft Voting",
        "Reduce number and complexity of base models",
        "Always use Deep Neural Networks"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Fewer and simpler models decrease computation and latency while maintaining reasonable accuracy."
    },
    {
      "id": 89,
      "questionText": "Scenario: Base models trained with different feature subsets. Expected benefit?",
      "options": [
        "Reduced diversity",
        "Ensemble ignored",
        "Increased diversity and ensemble robustness",
        "Soft Voting fails"
      ],
      "correctAnswerIndex": 2,
      "explanation": "Different feature subsets create diverse predictions, improving ensemble performance."
    },
    {
      "id": 90,
      "questionText": "Scenario: You combine models with complementary strengths. Outcome?",
      "options": [
        "Enhanced performance compared to any single model",
        "Ensemble fails",
        "Performance drops",
        "Only Hard Voting works"
      ],
      "correctAnswerIndex": 0,
      "explanation": "Combining complementary models allows the ensemble to cover weaknesses of individual models."
    },
    {
      "id": 91,
      "questionText": "Scenario: Ensemble shows slightly worse performance than best base model. Reason?",
      "options": [
        "Hard Voting ignored",
        "Models may be too correlated or weakly performing",
        "Voting always reduces performance",
        "Dataset too large"
      ],
      "correctAnswerIndex": 1,
      "explanation": "High correlation or weak base models can reduce ensemble benefit."
    },
    {
      "id": 92,
      "questionText": "Scenario: You want maximum interpretability with moderate performance. Best option?",
      "options": [
        "Random ensemble",
        "Hard Voting with simple base models",
        "Weighted Soft Voting with complex models",
        "Single complex model"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Hard Voting with simple models is easier to interpret while maintaining decent performance."
    },
    {
      "id": 93,
      "questionText": "Scenario: Ensemble prediction differs from all base models. Possible reason?",
      "options": [
        "Hard Voting tie occurs",
        "Impossible in Voting",
        "Error in data preprocessing",
        "Soft Voting probability averaging can yield a class different from all individual predictions"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Soft Voting averages probabilities, which may shift the final predicted class."
    },
    {
      "id": 94,
      "questionText": "Scenario: Using ensemble for critical medical diagnosis. Preferred setup?",
      "options": [
        "Hard Voting with weak models",
        "Single uncalibrated model",
        "Random Voting",
        "Weighted Soft Voting with calibrated models"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Weighted Soft Voting ensures reliable models dominate the prediction, improving accuracy for high-stakes tasks."
    },
    {
      "id": 95,
      "questionText": "Scenario: Ensemble uses multiple similar trees. Soft Voting vs Hard Voting?",
      "options": [
        "Soft Voting always better",
        "Hard Voting fails",
        "Ensemble ignored",
        "Little difference since models are correlated"
      ],
      "correctAnswerIndex": 3,
      "explanation": "Highly correlated models provide minimal ensemble improvement, regardless of voting type."
    },
    {
      "id": 96,
      "questionText": "Scenario: You want to balance accuracy and latency. Recommendation?",
      "options": [
        "Reduce base models and consider simpler learners",
        "Always Soft Voting",
        "Ignore latency",
        "Use all available models regardless of size"
      ],
      "correctAnswerIndex": 0,
      "explanation": "Fewer and simpler models reduce latency while maintaining reasonable accuracy."
    },
    {
      "id": 97,
      "questionText": "Scenario: Ensemble uses probabilistic outputs from calibrated models. Expected outcome?",
      "options": [
        "Soft Voting fails",
        "Improved prediction reliability using Soft Voting",
        "Hard Voting fails",
        "No difference"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Calibrated probabilities improve the effectiveness of Soft Voting."
    },
    {
      "id": 98,
      "questionText": "Scenario: Base models vary greatly in accuracy. Best Voting strategy?",
      "options": [
        "Hard Voting ignoring weights",
        "Weighted Soft Voting to emphasize stronger models",
        "Random selection",
        "Remove weaker models"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Weighted Soft Voting allows stronger models to have more influence on the final prediction."
    },
    {
      "id": 99,
      "questionText": "Scenario: Ensemble shows high variance in predictions. Possible solution?",
      "options": [
        "Reduce number of base models",
        "Increase diversity among base models",
        "Switch to single model",
        "Use uncalibrated probabilities"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Greater model diversity reduces correlated errors and stabilizes ensemble predictions."
    },
    {
      "id": 100,
      "questionText": "Scenario: You combine heterogeneous models using Voting Ensemble. Goal achieved?",
      "options": [
        "Hard Voting fails",
        "Improved generalization and robustness over individual models",
        "Soft Voting ignored",
        "Reduced accuracy"
      ],
      "correctAnswerIndex": 1,
      "explanation": "Heterogeneous ensembles leverage complementary strengths, improving generalization and robustness."
    }
  ]
}