{"id":2235,"date":"2025-09-19T04:14:48","date_gmt":"2025-09-19T03:14:48","guid":{"rendered":"https:\/\/cyberenlightener.com\/?page_id=2235"},"modified":"2025-10-26T09:39:42","modified_gmt":"2025-10-26T09:39:42","slug":"ensemble-learning","status":"publish","type":"page","link":"https:\/\/cyberenlightener.com\/?page_id=2235","title":{"rendered":"Ensemble learning, Class imbalance (SMOTE &amp; other techniques), Hyperparameter tuning"},"content":{"rendered":"\n<h1 class=\"wp-block-heading\">Introduction to Ensemble Methods<\/h1>\n\n\n\n<p>Ensemble learning refers to a family of techniques where multiple models are combined to build a more powerful and accurate predictor. Well-known ensemble approaches include <strong>bagging<\/strong>, <strong>boosting<\/strong>, and <strong>random forests<\/strong>.<\/p>\n\n\n\n<p>The central idea is to train a collection of models, often called <em>base classifiers<\/em> (M\u2081, M\u2082, \u2026, M\u2096), and then merge their outputs to form a stronger <em>meta-classifier<\/em> (M*). Instead of relying on a single learning model, the ensemble leverages the collective wisdom of many models.<\/p>\n\n\n\n<p>Given a dataset <strong>D<\/strong>, we generate <strong>k<\/strong> different training subsets (D\u2081, D\u2082, \u2026, D\u2096). Each subset is used to build one base classifier. When a new data point needs to be classified, each model provides its own prediction (a \u201cvote\u201d). The ensemble aggregates these predictions\u2014often through majority voting\u2014and produces the final decision.<\/p>\n\n\n\n<p>Ensembles typically outperform individual models. For example, if the ensemble relies on majority voting, it will only misclassify a data point if more than half of its base classifiers are wrong. 
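This majority-vote argument can be checked numerically. The sketch below is illustrative (not part of the original material) and assumes the strong simplification that the base classifiers make errors independently of one another, which real ensembles only approximate:

```python
# Probability that a majority-vote ensemble errs, assuming each of the
# k base classifiers is wrong independently with probability p.
# The ensemble misclassifies only when more than half the votes are wrong.
from math import comb

def ensemble_error(k: int, p: float) -> float:
    return sum(comb(k, i) * p**i * (1 - p) ** (k - i)
               for i in range(k // 2 + 1, k + 1))

# A single classifier that errs 30% of the time...
print(ensemble_error(1, 0.30))
# ...versus 25 such classifiers voting together: the error rate collapses
# to a small fraction of the individual rate.
print(round(ensemble_error(25, 0.30), 4))
```

The independence assumption is exactly why diversity matters: if all base classifiers made the same mistakes, voting would gain nothing.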
Thus, even when some models make mistakes, the combined decision can still be correct.<\/p>\n\n\n\n<p>The strength of an ensemble lies in <strong>diversity<\/strong>\u2014the models should not all make the same types of errors. Low correlation among base classifiers ensures that their mistakes are less likely to overlap. Importantly, each model should perform better than random guessing to contribute meaningfully. Another advantage is that ensemble methods can be parallelized, since each classifier can be trained independently on different processors.<\/p>\n\n\n\n<p>To illustrate, consider a two-class problem with attributes <em>x\u2081<\/em> and <em>x\u2082<\/em>. A single decision tree might generate a rough, piecewise-constant decision boundary. However, when we combine many such trees into an ensemble, the resulting decision surface becomes more refined and closer to the true boundary\u2014demonstrating the increased accuracy of ensembles.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h1 class=\"wp-block-heading\">Bagging: Bootstrap Aggregation<\/h1>\n\n\n\n<p>Bagging is one of the simplest yet most effective ensemble techniques. Its goal is to improve prediction accuracy by reducing variance in the models.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Algorithm: Bagging<\/h3>\n\n\n\n<p><strong>Input:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>D, a dataset of <em>d<\/em> training examples<\/li>\n\n\n\n<li>k, the number of models to be combined<\/li>\n\n\n\n<li>A base learning algorithm (e.g., decision trees, na\u00efve Bayes, etc.)<\/li>\n<\/ul>\n\n\n\n<p><strong>Process:<\/strong><\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>For i = 1 to k:<br>a. Generate a bootstrap sample D\u1d62 by sampling from D <em>with replacement<\/em>.<br>b. 
Train a classifier M\u1d62 on D\u1d62 using the chosen learning algorithm.<\/li>\n\n\n\n<li>To classify a new instance X:\n<ul class=\"wp-block-list\">\n<li>Let each M\u1d62 predict the class of X.<\/li>\n\n\n\n<li>Output the class label with the majority vote.<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n\n\n\n<p>This method is called <em>bootstrap aggregation<\/em> because each subset is a bootstrap sample. Since sampling is with replacement, some data points from D may appear multiple times in D\u1d62, while others may be excluded.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Why Bagging Works<\/h3>\n\n\n\n<p>Bagging improves robustness, especially against noisy data and overfitting. By averaging across many classifiers, it reduces variance without substantially increasing bias.<\/p>\n\n\n\n<p>A simple analogy is medical diagnosis: instead of asking just one doctor, you consult many doctors. Each gives a possible diagnosis, and the final decision is made based on the majority opinion. This aggregated decision is usually more reliable than relying on a single opinion.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Regression Case<\/h3>\n\n\n\n<p>When applied to regression, bagging predicts continuous values by averaging the outputs of the base models rather than voting.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Advantages<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Improves accuracy compared to a single classifier.<\/li>\n\n\n\n<li>Rarely performs worse than the base model.<\/li>\n\n\n\n<li>Works well with high-variance models like decision trees.<\/li>\n<\/ul>\n\n\n\n<p>In practice, bagging often delivers a substantial performance boost and forms the basis of more advanced ensemble methods such as <strong>random forests<\/strong>.<\/p>\n\n\n\n<h1 class=\"wp-block-heading\"><strong>Boosting and AdaBoost<\/strong><\/h1>\n\n\n\n<p>Boosting is another popular ensemble learning technique that improves accuracy by assigning different weights to classifiers and training samples. 
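The bagging procedure described above can be sketched in a few lines of Python. Everything here is illustrative rather than taken from the original material: a one-threshold "decision stump" (`train_stump`) stands in for the base learning algorithm, and the toy dataset is invented for the example.

```python
# Bagging sketch: k bootstrap samples, one base classifier per sample,
# majority vote at prediction time.
import random
from collections import Counter

def train_stump(data):
    """data: list of (x, label). Pick the single threshold that fits best."""
    majority = Counter(y for _, y in data).most_common(1)[0][0]
    best = None  # (num_correct, threshold, left_label, right_label)
    for t in sorted({x for x, _ in data}):
        left = Counter(y for x, y in data if x <= t).most_common(1)
        right = Counter(y for x, y in data if x > t).most_common(1)
        if not left or not right:
            continue
        correct = left[0][1] + right[0][1]
        if best is None or correct > best[0]:
            best = (correct, t, left[0][0], right[0][0])
    if best is None:  # degenerate sample: fall back to the majority class
        return lambda x: majority
    _, t, left_label, right_label = best
    return lambda x: left_label if x <= t else right_label

def bagging(data, k, seed=0):
    rng = random.Random(seed)
    models = []
    for _ in range(k):                            # build k base classifiers
        boot = [rng.choice(data) for _ in data]   # 1a. sample D_i with replacement
        models.append(train_stump(boot))          # 1b. train M_i on D_i
    def predict(x):                               # 2. majority vote over all M_i
        return Counter(m(x) for m in models).most_common(1)[0][0]
    return predict

data = [(1, "A"), (2, "A"), (3, "A"), (6, "B"), (7, "B"), (8, "B")]
clf = bagging(data, k=11)
print(clf(1), clf(8))
```

Because each bootstrap sample differs, the eleven stumps place their thresholds at slightly different points, and the vote smooths out the individual quirks.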
Imagine consulting multiple doctors for a diagnosis\u2014not only do you gather their opinions, but you also give more importance (weight) to the doctors who have been more accurate in the past. Similarly, in boosting, classifiers with better performance get stronger influence in the final decision.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">How Boosting Works<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Each training instance is given a weight.<\/li>\n\n\n\n<li>A sequence of classifiers is trained iteratively. After each classifier <strong>M\u1d62<\/strong> is built, the weights of the training data are adjusted:\n<ul class=\"wp-block-list\">\n<li>Misclassified samples get <strong>higher weights<\/strong> so that the next classifier pays more attention to them.<\/li>\n\n\n\n<li>Correctly classified samples get <strong>lower weights<\/strong>.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>The final ensemble model combines all classifiers, but instead of equal voting (like bagging), each classifier\u2019s vote is weighted by its accuracy.<\/li>\n<\/ul>\n\n\n\n<p>This way, boosting builds a set of models that complement one another\u2014later classifiers focus more on the harder cases that earlier ones got wrong.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">AdaBoost (Adaptive Boosting)<\/h3>\n\n\n\n<p><strong>AdaBoost<\/strong> is the most widely used boosting algorithm. It starts by giving equal weight (1\/d) to all training tuples. Then it proceeds for <em>k<\/em> rounds:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Sampling:<\/strong> In each round, a training set D\u1d62 of size <em>d<\/em> is created by sampling with replacement. 
The chance of a tuple being selected depends on its weight.<\/li>\n\n\n\n<li><strong>Training:<\/strong> A classifier M\u1d62 is trained on D\u1d62.<\/li>\n\n\n\n<li><strong>Error Calculation:<\/strong> The error of M\u1d62 is measured as the total weight of misclassified samples. If error &gt; 0.5, the classifier is discarded.<\/li>\n\n\n\n<li><strong>Weight Update:<\/strong>\n<ul class=\"wp-block-list\">\n<li>Misclassified tuples get higher weights.<\/li>\n\n\n\n<li>Correctly classified tuples get lower weights.<\/li>\n\n\n\n<li>Weights are normalized so they always sum to 1.<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n\n\n\n<p>This ensures that difficult-to-classify tuples receive more attention in the next round.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">Weighted Voting in AdaBoost<\/h3>\n\n\n\n<p>When classifying a new instance X:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Each classifier M\u1d62 contributes a weighted vote.<\/li>\n\n\n\n<li>The weight is based on its accuracy and computed as:<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"239\" height=\"70\" src=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/image-1.png\" alt=\"\" class=\"wp-image-2239\"\/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"156\" height=\"65\" src=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/image-2.png\" alt=\"\" class=\"wp-image-2240\"\/><\/figure>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The class label with the highest total weighted vote across all classifiers is chosen as the final prediction.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">Bagging vs. 
Boosting<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Bagging<\/strong> treats all classifiers equally and reduces variance by averaging votes.<\/li>\n\n\n\n<li><strong>Boosting<\/strong> gives higher weight to more accurate classifiers and focuses on reducing both bias and variance by concentrating on difficult training samples.<\/li>\n<\/ul>\n\n\n\n<p>As a result, boosting often delivers higher accuracy than bagging, though it can be more sensitive to noisy data.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Adaboost<\/strong><\/h2>\n\n\n\n<div data-wp-interactive=\"core\/file\" class=\"wp-block-file\"><object data-wp-bind--hidden=\"!state.hasPdfPreview\" hidden class=\"wp-block-file__embed\" data=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/Adaboost-ssroy.pdf\" type=\"application\/pdf\" style=\"width:100%;height:600px\" aria-label=\"Embed of Adaboost-ssroy.\"><\/object><a id=\"wp-block-file--media-3250ee83-785a-4229-9ff8-ee7bab468a57\" href=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/Adaboost-ssroy.pdf\">Adaboost-ssroy<\/a><a href=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/Adaboost-ssroy.pdf\" class=\"wp-block-file__button wp-element-button\" download aria-describedby=\"wp-block-file--media-3250ee83-785a-4229-9ff8-ee7bab468a57\">Download<\/a><\/div>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"398\" height=\"144\" src=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/image-3.png\" alt=\"\" class=\"wp-image-2243\" srcset=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/image-3.png 398w, https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/image-3-300x109.png 300w\" sizes=\"auto, (max-width: 398px) 100vw, 398px\" \/><\/figure>\n\n\n\n<div data-wp-interactive=\"core\/file\" class=\"wp-block-file\"><object data-wp-bind--hidden=\"!state.hasPdfPreview\" hidden class=\"wp-block-file__embed\" 
data=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/Adaboost-CATEGORICAL-ssroy.pdf\" type=\"application\/pdf\" style=\"width:100%;height:600px\" aria-label=\"Embed of Adaboost-CATEGORICAL-ssroy.\"><\/object><a id=\"wp-block-file--media-c5c9c0a1-a8b3-420b-afc9-14e90977ac94\" href=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/Adaboost-CATEGORICAL-ssroy.pdf\">Adaboost-CATEGORICAL-ssroy<\/a><a href=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/Adaboost-CATEGORICAL-ssroy.pdf\" class=\"wp-block-file__button wp-element-button\" download aria-describedby=\"wp-block-file--media-c5c9c0a1-a8b3-420b-afc9-14e90977ac94\">Download<\/a><\/div>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Adaboost Numerical<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"771\" height=\"1024\" src=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/ADABOOST1SSROY-771x1024.jpeg\" alt=\"\" class=\"wp-image-2246\" style=\"width:1178px;height:auto\" srcset=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/ADABOOST1SSROY-771x1024.jpeg 771w, https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/ADABOOST1SSROY-226x300.jpeg 226w, https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/ADABOOST1SSROY-768x1021.jpeg 768w, https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/ADABOOST1SSROY-1156x1536.jpeg 1156w, https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/ADABOOST1SSROY.jpeg 1204w\" sizes=\"auto, (max-width: 771px) 100vw, 771px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"757\" height=\"1024\" src=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/2ADABOOSTSSROY-757x1024.jpeg\" alt=\"\" class=\"wp-image-2247\" style=\"width:1178px;height:auto\" 
srcset=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/2ADABOOSTSSROY-757x1024.jpeg 757w, https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/2ADABOOSTSSROY-222x300.jpeg 222w, https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/2ADABOOSTSSROY-768x1039.jpeg 768w, https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/2ADABOOSTSSROY-1136x1536.jpeg 1136w, https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/2ADABOOSTSSROY.jpeg 1183w\" sizes=\"auto, (max-width: 757px) 100vw, 757px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"637\" src=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/3ADABOOST-1024x637.jpeg\" alt=\"\" class=\"wp-image-2248\" srcset=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/3ADABOOST-1024x637.jpeg 1024w, https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/3ADABOOST-300x187.jpeg 300w, https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/3ADABOOST-768x478.jpeg 768w, https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/3ADABOOST.jpeg 1499w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<div data-wp-interactive=\"core\/file\" class=\"wp-block-file\"><object data-wp-bind--hidden=\"!state.hasPdfPreview\" hidden class=\"wp-block-file__embed\" data=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/Adaboost-ssroy1.pdf\" type=\"application\/pdf\" style=\"width:100%;height:600px\" aria-label=\"Embed of Adaboost-ssroy1.\"><\/object><a id=\"wp-block-file--media-d7e50c3c-fa62-4439-928f-db4bdd385a72\" href=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/Adaboost-ssroy1.pdf\">Adaboost-ssroy1<\/a><a href=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/09\/Adaboost-ssroy1.pdf\" class=\"wp-block-file__button wp-element-button\" download 
aria-describedby=\"wp-block-file--media-d7e50c3c-fa62-4439-928f-db4bdd385a72\">Download<\/a><\/div>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Class imbalance<\/strong><\/h2>\n\n\n\n<div data-wp-interactive=\"core\/file\" class=\"wp-block-file\"><object data-wp-bind--hidden=\"!state.hasPdfPreview\" hidden class=\"wp-block-file__embed\" data=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/10\/Sampling-Methods-for-Imbalance-1-1-1.pdf\" type=\"application\/pdf\" style=\"width:100%;height:600px\" aria-label=\"Embed of Sampling-Methods-for-Imbalance-1-1 (1).\"><\/object><a id=\"wp-block-file--media-bda7e455-7ea4-440d-b3f7-f73c3123695a\" href=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/10\/Sampling-Methods-for-Imbalance-1-1-1.pdf\">Sampling-Methods-for-Imbalance-1-1 (1)<\/a><a href=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/10\/Sampling-Methods-for-Imbalance-1-1-1.pdf\" class=\"wp-block-file__button wp-element-button\" download aria-describedby=\"wp-block-file--media-bda7e455-7ea4-440d-b3f7-f73c3123695a\">Download<\/a><\/div>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Model Tuning<\/strong><\/h2>\n\n\n\n<div data-wp-interactive=\"core\/file\" class=\"wp-block-file\"><object data-wp-bind--hidden=\"!state.hasPdfPreview\" hidden class=\"wp-block-file__embed\" data=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/10\/ClassImbalancematrial_provided_by_ProfRoy.pdf\" type=\"application\/pdf\" style=\"width:100%;height:600px\" aria-label=\"Embed of ClassImbalancematrial_provided_by_ProfRoy.\"><\/object><a id=\"wp-block-file--media-2092ccd1-e5c0-4266-8066-e24d25f1a8fc\" href=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/10\/ClassImbalancematrial_provided_by_ProfRoy.pdf\">ClassImbalancematrial_provided_by_ProfRoy<\/a><a href=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/10\/ClassImbalancematrial_provided_by_ProfRoy.pdf\" class=\"wp-block-file__button wp-element-button\" download 
aria-describedby=\"wp-block-file--media-2092ccd1-e5c0-4266-8066-e24d25f1a8fc\">Download<\/a><\/div>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Hyperparameter tuning<\/strong><\/h2>\n\n\n\n<div data-wp-interactive=\"core\/file\" class=\"wp-block-file\"><object data-wp-bind--hidden=\"!state.hasPdfPreview\" hidden class=\"wp-block-file__embed\" data=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/10\/Hyperparameter-optimization.pdf\" type=\"application\/pdf\" style=\"width:100%;height:600px\" aria-label=\"Embed of Hyperparameter optimization.\"><\/object><a id=\"wp-block-file--media-c3d58a50-1ece-4c02-bb3d-9c8e127ba148\" href=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/10\/Hyperparameter-optimization.pdf\">Hyperparameter optimization<\/a><a href=\"https:\/\/cyberenlightener.com\/wp-content\/uploads\/2025\/10\/Hyperparameter-optimization.pdf\" class=\"wp-block-file__button wp-element-button\" download aria-describedby=\"wp-block-file--media-c3d58a50-1ece-4c02-bb3d-9c8e127ba148\">Download<\/a><\/div>\n","protected":false},"excerpt":{"rendered":"<p>Introduction to Ensemble Methods Ensemble learning refers to a family of techniques where multiple models are combined to build a more powerful and accurate predictor. Well-known ensemble approaches include bagging, boosting, and random forests. 
The central idea is to train a collection of models, often called base classifiers (M\u2081, M\u2082, \u2026, M\u2096), and then merge [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"class_list":["post-2235","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/cyberenlightener.com\/index.php?rest_route=\/wp\/v2\/pages\/2235","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cyberenlightener.com\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/cyberenlightener.com\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/cyberenlightener.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/cyberenlightener.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2235"}],"version-history":[{"count":6,"href":"https:\/\/cyberenlightener.com\/index.php?rest_route=\/wp\/v2\/pages\/2235\/revisions"}],"predecessor-version":[{"id":2346,"href":"https:\/\/cyberenlightener.com\/index.php?rest_route=\/wp\/v2\/pages\/2235\/revisions\/2346"}],"wp:attachment":[{"href":"https:\/\/cyberenlightener.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2235"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}