{"id":3760,"date":"2024-09-25T16:10:04","date_gmt":"2024-09-25T16:10:04","guid":{"rendered":"https:\/\/wp.sigmod.org\/?p=3760"},"modified":"2024-09-25T16:10:06","modified_gmt":"2024-09-25T16:10:06","slug":"auditing-bias-of-recourse-in-classifiers-part-ii","status":"publish","type":"post","link":"https:\/\/wp.sigmod.org\/?p=3760","title":{"rendered":"Auditing bias of recourse in classifiers (part II)\u00a0"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">Introduction\u00a0<\/h2>\n\n\n\n<p>Fairness is a fundamental principle reflecting our innate sense of justice and equity. The essence of fairness lies in the equitable, unbiased and just treatment of all individuals. In our previous post (<a href=\"https:\/\/wp.sigmod.org\/?p=3726\" target=\"_blank\" rel=\"noreferrer noopener\">part I<\/a>), we provided an introduction to the <em>bias of recourse <\/em>problem. In this post (part II), we present our framework for defining and auditing bias of recourse on subgroups: a series of fairness definitions, together with an efficient auditing mechanism that applies them.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Fairness-Aware Counterfactuals for Subgroups\u00a0<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Preliminaries\u00a0<\/h3>\n\n\n\n<p>We start by providing a set of formal definitions capturing different aspects of fairness of recourse.<\/p>\n\n\n\n<p>We consider a <strong>feature space <\/strong><em>X <\/em>= <em>X<\/em><sub>1<\/sub>, \u2026, <em>X<sub>n<\/sub><\/em>, where <em>X<sub>n<\/sub> <\/em>denotes the <strong>protected feature<\/strong>, which, for ease of presentation, takes two values {0<em>, <\/em>1}. 
For an instance <em>x<\/em> \u2208 <em>X<\/em>, we use the notation <em>x.X<sub>i<\/sub> <\/em>to refer to its value in feature <em>X<sub>i<\/sub><\/em>.\u00a0<\/p>\n\n\n\n<p>We consider a binary <strong>classifier <\/strong><em>h <\/em>: <em>X <\/em>\u27f6<em> <\/em>{<em>&#8211;<\/em>1<em>, <\/em>1}, where the positive outcome is the favorable one. For a given <em>h<\/em>, we are concerned with a dataset <em>D <\/em>of <strong>adversely affected individuals<\/strong>, i.e., those who receive the unfavorable outcome. We use the term instance to refer to any point in the feature space <em>X<\/em>, and the term individual to refer to an instance from the dataset <em>D<\/em>.\u00a0<\/p>\n\n\n\n<p>We define an <strong>action <\/strong><em>a <\/em>as a set of changes to feature values, e.g., <em>a <\/em>= {<em>country \u27f6<\/em><em> US, education-num \u27f6<\/em><em> <\/em>12}. We denote as <em>A <\/em>the set of possible actions. An action <em>a <\/em>applied to an individual (a factual instance) <em>x<\/em> returns a <strong>counterfactual <\/strong>instance <em>x<\/em>\u2019 = <em>a<\/em>(<em>x<\/em>). If the individual <em>x <\/em>was adversely affected (<em>h<\/em>(<em>x<\/em>) = -1) and the action results in a counterfactual that receives the desired outcome (<em>h<\/em>(<em>a<\/em>(<em>x<\/em>)) = 1), we say that action <em>a <\/em>offers recourse to the individual <em>x <\/em>and is thus <strong>effective<\/strong>. In line with the literature, we also refer to an effective action as a <strong>counterfactual explanation <\/strong>for individual <em>x<\/em>.\u00a0<\/p>\n\n\n\n<p>An action <em>a <\/em>incurs a <strong>cost <\/strong>to an individual <em>x<\/em>, which we denote as cost(<em>a, x<\/em>). 
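The notions of action, counterfactual, and effectiveness translate directly into code. Below is a minimal sketch (our illustration, with a hypothetical toy classifier; not the FACTS implementation), where instances and actions are plain Python dictionaries and h is any function mapping an instance to -1 or 1:

```python
# Illustrative sketch (ours, not the FACTS implementation): instances and
# actions are plain dicts, and h is any classifier returning -1 or 1.

def apply_action(a, x):
    """Return the counterfactual x' = a(x), leaving the factual x unchanged."""
    x_cf = dict(x)
    x_cf.update(a)
    return x_cf

def offers_recourse(a, x, h):
    """a is effective for x if x is adversely affected and a(x) gets the favorable outcome."""
    return h(x) == -1 and h(apply_action(a, x)) == 1

# Hypothetical toy classifier: favorable iff education-num >= 12.
h = lambda x: 1 if x["education-num"] >= 12 else -1

x = {"country": "CA", "education-num": 9}   # adversely affected individual
a = {"country": "US", "education-num": 12}  # candidate action
assert offers_recourse(a, x, h)             # a is a counterfactual explanation for x
assert x["education-num"] == 9              # the factual instance is not mutated
```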
The cost function captures both how <em>feasible <\/em>the action <em>a <\/em>is for the individual <em>x<\/em>, and how <em>plausible <\/em>the counterfactual <em>a<\/em>(<em>x<\/em>) is.&nbsp;<\/p>\n\n\n\n<p>Given a set of actions <em>A<\/em>, we define the <strong>recourse cost <\/strong>rc(<em>A, x<\/em>) of an individual <em>x <\/em>as the minimum cost among effective actions if there is one, or otherwise some maximum cost represented as <em>c<\/em><sub>\u221e<\/sub>:&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"874\" height=\"84\" src=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-14.png\" alt=\"\" class=\"wp-image-3776\" srcset=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-14.png 874w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-14-300x29.png 300w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-14-150x14.png 150w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-14-768x74.png 768w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-14-590x57.png 590w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-14-102x10.png 102w\" sizes=\"auto, (max-width: 874px) 100vw, 874px\" \/><\/figure>\n\n\n\n<p>An effective action of minimum cost is also called a <strong>nearest counterfactual explanation<\/strong>.\u00a0We define a <strong>subspace <\/strong><em>X<sub>p<\/sub>\u00a0<\/em>\u2286 <em>X <\/em>using a <strong>predicate <\/strong><em>p<\/em>, which is a conjunction of feature-level predicates of the form \u201c<em>feature<\/em>&#8211;<em>operator<\/em>&#8211;<em>value<\/em>\u201d, e.g., the predicate <em>p <\/em>= (<em>country <\/em>= <em>US<\/em>) \u2227 (<em>education-num<\/em> \u2265<em> <\/em>9) defines instances from the US that have more than 9 years of education.\u00a0<\/p>\n\n\n\n<p>Given a predicate <em>p<\/em>, we define the subpopulation <strong>group <\/strong><em>G<sub>p<\/sub><\/em> 
\u2286 <em>D <\/em>as the set of affected individuals that satisfy <em>p<\/em>, i.e., <em>G<sub>p<\/sub> <\/em>= {<em>x<\/em> \u2208<em> D <\/em>| <em>p<\/em>(<em>x<\/em>)}. We further distinguish between the <strong>protected subgroups <\/strong><em>G<sub>p,<\/sub><\/em><sub>1<\/sub> = {<em>x <\/em>\u2208<em> D <\/em>| <em>p<\/em>(<em>x<\/em>) \u2227 <em>x.X<sub>n<\/sub> <\/em>= 1} and <em>G<sub>p,<\/sub><\/em><sub>0<\/sub> = {<em>x<\/em> \u2208<em> D <\/em>| <em>p<\/em>(<em>x<\/em>) \u2227 <em>x.X<sub>n<\/sub> <\/em>= 0}. When the predicate <em>p <\/em>is understood, we may omit it in the designation of a group to simplify notation.\u00a0<\/p>\n\n\n\n<p>For a specific action <em>a<\/em>, we naturally define its <strong>effectiveness <\/strong>(eff) for a group <em>G <\/em>as the proportion of individuals from <em>G <\/em>that achieve recourse through <em>a<\/em>:&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"502\" height=\"90\" src=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-15.png\" alt=\"\" class=\"wp-image-3777\" srcset=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-15.png 502w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-15-300x54.png 300w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-15-150x27.png 150w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-15-102x18.png 102w\" sizes=\"auto, (max-width: 502px) 100vw, 502px\" \/><\/figure>\n\n\n\n<p>We want to examine how recourse is achieved for group <em>G <\/em>through a set of possible actions <em>A<\/em>. We define the <strong>aggregate effectiveness <\/strong>(aeff) of <em>A <\/em>for <em>G <\/em>in two distinct ways.&nbsp;<\/p>\n\n\n\n<p>In the <em>micro viewpoint<\/em>, the individuals in the group are considered independently, and each may choose the action that benefits them the most. 
Concretely, we define the <strong>micro-effectiveness <\/strong>of a set of actions <em>A <\/em>for group <em>G <\/em>as the proportion of individuals in <em>G <\/em>that can achieve recourse through <em>some <\/em>action in <em>A<\/em>, i.e.:&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"598\" height=\"72\" src=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-16.png\" alt=\"\" class=\"wp-image-3778\" srcset=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-16.png 598w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-16-300x36.png 300w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-16-150x18.png 150w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-16-590x71.png 590w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-16-102x12.png 102w\" sizes=\"auto, (max-width: 598px) 100vw, 598px\" \/><\/figure>\n\n\n\n<p>In the <em>macro viewpoint<\/em>, the group is considered as a whole, and an action is applied collectively to all individuals in the group. 
Concretely, we define the <strong>macro-effectiveness <\/strong>of a set of actions <em>A <\/em>for group <em>G <\/em>as the largest proportion of individuals in <em>G <\/em>that can achieve recourse through <em>the same <\/em>action in <em>A<\/em>, i.e.:\u00a0<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"542\" height=\"66\" src=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-17.png\" alt=\"\" class=\"wp-image-3779\" srcset=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-17.png 542w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-17-300x37.png 300w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-17-150x18.png 150w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-17-102x12.png 102w\" sizes=\"auto, (max-width: 542px) 100vw, 542px\" \/><\/figure>\n\n\n\n<p>For a group <em>G<\/em>, actions <em>A<\/em>, and a cost budget <em>c<\/em>, we define the <strong>in-budget actions <\/strong>as the set of actions that cost at most <em>c <\/em>for any individual in <em>G<\/em>:\u00a0<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"550\" height=\"58\" src=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-18.png\" alt=\"\" class=\"wp-image-3780\" srcset=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-18.png 550w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-18-300x32.png 300w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-18-150x16.png 150w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-18-102x11.png 102w\" sizes=\"auto, (max-width: 550px) 100vw, 550px\" \/><\/figure>\n\n\n\n<p>We define the <strong>effectiveness-cost distribution <\/strong>(ecd) as the function that, for a cost budget <em>c<\/em>, returns the aggregate effectiveness possible with in-budget actions:\u00a0<\/p>\n\n\n\n<figure class=\"wp-block-image 
size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"356\" height=\"44\" src=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-19.png\" alt=\"\" class=\"wp-image-3781\" srcset=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-19.png 356w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-19-300x37.png 300w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-19-150x19.png 150w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-19-102x13.png 102w\" sizes=\"auto, (max-width: 356px) 100vw, 356px\" \/><\/figure>\n\n\n\n<p>We use ecd<em><sub>\u00b5<\/sub><\/em>, ecd<sub>M<\/sub> to refer to the micro, macro viewpoints of aggregate effectiveness. The value ecd(<em>c<\/em>; <em>A, G<\/em>) is the proportion of individuals in <em>G <\/em>that can achieve recourse through actions <em>A <\/em>with cost at most <em>c<\/em>. Therefore, the ecd function has an intuitive probabilistic interpretation. Consider the subspace <em>X<sub>p<\/sub> <\/em>determined by predicate <em>p<\/em>, and define the random variable <em>C <\/em>as the cost required by an instance <em>x<\/em> \u2208<em> X<sub>p<\/sub> <\/em>to achieve recourse. 
The function ecd(<em>c<\/em>; <em>A, G<sub>p<\/sub><\/em>) is the empirical cumulative distribution function of <em>C <\/em>using sample <em>G<sub>p<\/sub><\/em>.\u00a0<\/p>\n\n\n\n<p>The <strong>inverse effectiveness-cost distribution <\/strong>function ecd<sup>\u22121<\/sup>(<em>\u03d5<\/em>; <em>A, G<\/em>) takes as input an effectiveness level <em>\u03d5 <\/em>\u2208 [0<em>, <\/em>1] and returns the minimum cost required so that <em>\u03d5<\/em>|<em>G<\/em>| individuals achieve recourse.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Definitions for fairness of recourse\u00a0<\/h3>\n\n\n\n<p>We define recourse fairness of classifier <em>h <\/em>for a group <em>G <\/em>by comparing the ecd functions of the protected subgroups <em>G<\/em><sub>0<\/sub>, <em>G<\/em><sub>1<\/sub> in different ways.&nbsp;<\/p>\n\n\n\n<p>The first two definitions are <em>cost-oblivious<\/em>, and apply whenever we can ignore the cost function. Specifically, given a set of actions <em>A <\/em>and a group <em>G<\/em>, we assume that all actions in <em>A <\/em>are considered equivalent, and that the cost of any action is the same for all individuals in the group; i.e., cost(<em>a, x<\/em>) = cost(<em>a<\/em><sup>\u2032<\/sup><em>, x<\/em><sup>\u2032<\/sup>), \u2200<em>a<\/em> \u2260 <em>a<\/em><sup>\u2032<\/sup> \u2208 <em>A, x<\/em> \u2260 <em>x<\/em><sup>\u2032<\/sup> \u2208 <em>G<\/em>. 
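The quantities eff, aeff, and ecd defined above can be sketched as follows (our illustration on hypothetical toy data, with a simplified cost equal to the total numeric change an action induces; not the FACTS code):

```python
# Illustrative sketch (ours) of eff, aeff and ecd, using plain dicts for
# instances/actions and a hypothetical toy classifier.

def apply_action(a, x):
    """Counterfactual x' = a(x)."""
    return {**x, **a}

def eff(a, G, h):
    """Proportion of group G for which action a achieves recourse."""
    return sum(h(apply_action(a, x)) == 1 for x in G) / len(G)

def aeff_micro(A, G, h):
    """Micro viewpoint: each individual may use any action in A."""
    return sum(any(h(apply_action(a, x)) == 1 for a in A) for x in G) / len(G)

def aeff_macro(A, G, h):
    """Macro viewpoint: best proportion achievable by a single common action."""
    return max((eff(a, G, h) for a in A), default=0.0)

def ecd(c, A, G, h, cost, aeff=aeff_micro):
    """Aggregate effectiveness using only actions within budget c for everyone in G."""
    in_budget = [a for a in A if all(cost(a, x) <= c for x in G)]
    return aeff(in_budget, G, h)

# Hypothetical toy setup: favorable outcome iff education-num >= 12;
# cost of an action = total numeric change it induces on the individual.
h = lambda x: 1 if x["education-num"] >= 12 else -1
cost = lambda a, x: sum(abs(v - x[k]) for k, v in a.items())
G = [{"education-num": 9}, {"education-num": 11}]
A = [{"education-num": 12}]

assert aeff_micro(A, G, h) == 1.0    # everyone has some effective action
assert ecd(2, A, G, h, cost) == 0.0  # the action exceeds budget 2 for the first individual
assert ecd(3, A, G, h, cost) == 1.0  # with budget 3 it is in-budget for the whole group
```

Note that ecd is non-decreasing in the budget c, matching its reading as an empirical cumulative distribution function.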
The definitions simply compare the aggregate effectiveness of a set of actions on the protected subgroups.\u00a0<\/p>\n\n\n\n<p><strong>Equal Effectiveness <\/strong>This definition has a micro and a macro interpretation, and says that the classifier is fair if the same proportion of individuals in the protected subgroups can achieve recourse:&nbsp;&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"354\" height=\"38\" src=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-20.png\" alt=\"\" class=\"wp-image-3782\" srcset=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-20.png 354w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-20-300x32.png 300w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-20-150x16.png 150w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-20-102x11.png 102w\" sizes=\"auto, (max-width: 354px) 100vw, 354px\" \/><\/figure>\n\n\n\n<p><strong>Equal Choice for Recourse <\/strong>This definition has only a macro interpretation and claims that the classifier is fair if the protected subgroups can choose among the same number of sufficiently effective actions to achieve recourse, where sufficiently effective means the actions should work for at least 100<em>\u03d5<\/em>% (for <em>\u03d5 <\/em>\u2208 [0<em>, <\/em>1]) of the subgroup:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"668\" height=\"44\" src=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-21.png\" alt=\"\" class=\"wp-image-3783\" srcset=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-21.png 668w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-21-300x20.png 300w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-21-150x10.png 150w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-21-590x39.png 590w, 
https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-21-102x7.png 102w\" sizes=\"auto, (max-width: 668px) 100vw, 668px\" \/><\/figure>\n\n\n\n<p>The next three definitions assume the cost function is specified, and have both a micro and a macro interpretation.<\/p>\n\n\n\n<p><strong>Equal Effectiveness within Budget <\/strong>The classifier is fair if the same proportion of individuals in the protected subgroups can achieve recourse with a cost at most <em>c<\/em>:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"364\" height=\"42\" src=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-22.png\" alt=\"\" class=\"wp-image-3784\" srcset=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-22.png 364w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-22-300x35.png 300w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-22-150x17.png 150w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-22-102x12.png 102w\" sizes=\"auto, (max-width: 364px) 100vw, 364px\" \/><\/figure>\n\n\n\n<p><strong>Equal Cost of Effectiveness <\/strong>The classifier is fair if the minimum cost to achieve aggregate effectiveness of <em>\u03d5 <\/em>\u2208 [0<em>, <\/em>1] in the protected subgroups is equal:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"412\" height=\"44\" src=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-23.png\" alt=\"\" class=\"wp-image-3785\" srcset=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-23.png 412w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-23-300x32.png 300w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-23-150x16.png 150w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-23-102x11.png 102w\" sizes=\"auto, (max-width: 412px) 100vw, 412px\" \/><\/figure>\n\n\n\n<p><strong>Fair Effectiveness-Cost Trade-Off 
<\/strong>The classifier is fair if the protected subgroups have the same effectiveness-cost distribution, or equivalently for each cost budget <em>c<\/em>, their aggregate effectiveness is equal:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"458\" height=\"46\" src=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-10.png\" alt=\"\" class=\"wp-image-3771\" srcset=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-10.png 458w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-10-300x30.png 300w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-10-150x15.png 150w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-10-102x10.png 102w\" sizes=\"auto, (max-width: 458px) 100vw, 458px\" \/><\/figure>\n\n\n\n<p>The left-hand side represents the two-sample Kolmogorov-Smirnov statistic for the empirical cumulative distributions (ecd) of the protected subgroups. We say that the classifier is fair with confidence <em>\u03b1 <\/em>if this statistic is less than:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"284\" height=\"56\" src=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-11.png\" alt=\"\" class=\"wp-image-3772\" srcset=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-11.png 284w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-11-150x30.png 150w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-11-102x20.png 102w\" sizes=\"auto, (max-width: 284px) 100vw, 284px\" \/><\/figure>\n\n\n\n<p>The last definition takes a micro viewpoint and extends the notion of <em>burden <\/em>[5] from literature to the case where not all individuals may achieve recourse. 
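As an illustration, the Kolmogorov-Smirnov comparison behind the Fair Effectiveness-Cost Trade-Off definition can be sketched as follows (our simplified code, operating directly on hypothetical per-individual recourse-cost samples; the threshold is the standard two-sample KS critical value, which we assume matches the expression in the figure above):

```python
import math

# Illustrative sketch (ours) of the Fair Effectiveness-Cost Trade-Off check,
# on hypothetical recourse-cost samples for the two protected subgroups.

def ks_statistic(costs0, costs1):
    """Sup distance between the empirical recourse-cost CDFs of two protected subgroups."""
    grid = sorted(set(costs0) | set(costs1))
    cdf = lambda sample, t: sum(v <= t for v in sample) / len(sample)
    return max(abs(cdf(costs0, t) - cdf(costs1, t)) for t in grid)

def ks_threshold(n0, n1, alpha=0.05):
    """Standard two-sample Kolmogorov-Smirnov rejection threshold at level alpha."""
    return math.sqrt(-0.5 * math.log(alpha / 2)) * math.sqrt((n0 + n1) / (n0 * n1))

# Hypothetical samples: one subgroup reaches recourse at cost 1, the other at cost 10.
costs_female = [1] * 50
costs_male = [10] * 50

assert ks_statistic(costs_female, costs_female) == 0.0                # identical: fair
assert ks_statistic(costs_female, costs_male) > ks_threshold(50, 50)  # judged unfair
```

With identical samples the statistic is 0 (fair at any confidence level); with the shifted samples it exceeds the threshold, so the trade-off is judged unfair.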
The <strong>mean recourse cost <\/strong>of a group:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"322\" height=\"82\" src=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-12.png\" alt=\"\" class=\"wp-image-3773\" srcset=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-12.png 322w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-12-300x76.png 300w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-12-150x38.png 150w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-12-102x26.png 102w\" sizes=\"auto, (max-width: 322px) 100vw, 322px\" \/><\/figure>\n\n\n\n<p>considers individuals that cannot achieve recourse through <em>A <\/em>and have a recourse cost of <em>c<\/em><sub>\u221e<\/sub>. To exclude them, we denote as <em>G<\/em><sup>\u2217<\/sup> the set of individuals of <em>G <\/em>that can achieve recourse through an action in <em>A<\/em>, i.e., <em>G<\/em><sup>\u2217<\/sup> = {<em>x<\/em> \u2208 <em>G <\/em>| \u2203<em>a<\/em> \u2208 <em>A<\/em>, <em>h<\/em>(<em>a<\/em>(<em>x<\/em>)) = 1}. 
Then the <strong>conditional mean recourse cost <\/strong>is the mean recourse cost among those that can achieve recourse:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"372\" height=\"82\" src=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-13.png\" alt=\"\" class=\"wp-image-3774\" srcset=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-13.png 372w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-13-300x66.png 300w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-13-150x33.png 150w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-13-102x22.png 102w\" sizes=\"auto, (max-width: 372px) 100vw, 372px\" \/><\/figure>\n\n\n\n<p>If <em>G <\/em>= <em>G<\/em><sup>\u2217<\/sup>, the definitions coincide with burden.<\/p>\n\n\n\n<p><strong>Equal (Conditional) Mean Recourse <\/strong>The classifier is fair if the (conditional) mean recourse cost for the protected subgroups is the same:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"322\" height=\"44\" src=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-24.png\" alt=\"\" class=\"wp-image-3787\" srcset=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-24.png 322w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-24-300x41.png 300w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-24-150x20.png 150w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-24-102x14.png 102w\" sizes=\"auto, (max-width: 322px) 100vw, 322px\" \/><\/figure>\n\n\n\n<p>Note that when group <em>G <\/em>is the entire dataset of affected individuals, and all individuals can achieve recourse through <em>A<\/em>, this fairness notion coincides with fairness of burden [5].<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Auditing Fairness of Recourse<\/h3>\n\n\n\n<p>Let\u2019s now see how these definitions can be 
exploited in our framework, FACTS (Fairness-aware Counterfactuals for Subgroups), to audit the difficulty of achieving recourse in subgroups. The output of FACTS comprises population groups that are assigned (a) an unfairness score that captures the disparity between protected subgroups according to different fairness definitions and allows us to rank groups, and (b) a user-intuitive, easily explainable counterfactual summary, which we term <em>Comparative Subgroup Counterfactuals <\/em>(CSC).<\/p>\n\n\n\n<p>Figure 2 presents an example result of FACTS derived from the adult dataset<a href=\"#_ftn1\" id=\"_ftnref1\">[1]<\/a>. The \u201cif clause\u201d represents the subgroup <em>G<sub>p<\/sub><\/em>, which contains all the affected individuals that satisfy the predicate:<\/p>\n\n\n\n<p><em>p <\/em>= (<em>hours-per-week = FullTime<\/em>) \u2227 (<em>marital-status = Married-civ-spouse<\/em>) \u2227 (<em>occupation = Adm-clerical<\/em>)<\/p>\n\n\n\n<p>The information below the predicate refers to the protected subgroups <em>G<sub>p,<\/sub><\/em><sub>0<\/sub> and <em>G<sub>p,<\/sub><\/em><sub>1<\/sub>, which are the female and male individuals of <em>G<sub>p<\/sub><\/em>, respectively. In blue, we highlight the percentage cov(<em>G<sub>p,i<\/sub><\/em>) = |<em>G<sub>p,i<\/sub><\/em>| \/ |<em>D<sub>i<\/sub><\/em>|, which serves as an indicator of the size of the protected subgroup. The most important part of the representation is the set of applied actions, which appear below each protected subgroup and are evaluated in terms of fairness metrics. In this example, the metric is the <em>Equal Cost of Effectiveness <\/em>with effectiveness threshold <em>\u03d5 <\/em>= 0<em>.<\/em>7. 
For <em>G<sub>p,<\/sub><\/em><sub>0<\/sub> there is no action surpassing the threshold <em>\u03d5 <\/em>= 0<em>.<\/em>7, so we display a message accordingly. In contrast, the action:<\/p>\n\n\n\n<p><em>a <\/em>= {<em>hours-per-week <\/em>\u27f6<em> Overtime, occupation <\/em>\u27f6<em> Exec-managerial<\/em>}<\/p>\n\n\n\n<p>has effectiveness 0<em>.<\/em>72 <em>&gt; \u03d5 <\/em>for <em>G<sub>p,<\/sub><\/em><sub>1<\/sub>, thus allowing 72% of the male individuals of <em>G<sub>p<\/sub> <\/em>to achieve recourse. The unfairness score is \u201cinf\u201d, since no recourse is achieved for the female subgroup.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"936\" height=\"118\" src=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-25.png\" alt=\"\" class=\"wp-image-3788\" srcset=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-25.png 936w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-25-300x38.png 300w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-25-150x19.png 150w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-25-768x97.png 768w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-25-590x74.png 590w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-25-102x13.png 102w\" sizes=\"auto, (max-width: 936px) 100vw, 936px\" \/><figcaption class=\"wp-element-caption\"><em>Figure 2: CSC for a highly biased subgroup in terms of Equal Cost of Effectiveness with \u03d5 = 0.7.<\/em><\/figcaption><\/figure>\n\n\n\n<p><strong>Method overview <\/strong>Deploying FACTS comprises three main steps: (a) <em>Subgroup and action space generation <\/em>that creates the set of groups and the set of actions <em>A <\/em>to examine; (b) <em>Counterfactual summaries generation <\/em>that applies appropriate actions to each group 
<em>G<\/em>; and (c) <em>CSC construction and fairness ranking <\/em>that applies the fairness definitions presented above. Next, we describe these steps in detail.<\/p>\n\n\n\n<p><strong>(a)<\/strong> <strong>Subgroup and action space generation<\/strong>. Subgroups are generated by executing the fp-growth algorithm and keeping the subgroups common to the protected populations. The set of all actions <em>A <\/em>is generated by executing fp-growth on the unaffected population, to increase the chance of identifying more effective actions and to reduce the computational complexity. The above process is parameterizable with respect to the selection of the protected attribute(s) and the minimum frequency threshold for obtaining candidate subgroups. We emphasize that alternate mechanisms to generate the set of subgroups and the set of actions <em>A <\/em>are possible.<\/p>\n\n\n\n<p><strong>(b) Counterfactual summaries generation. <\/strong>For each subgroup <em>G<sub>p<\/sub><\/em>, the following steps are performed: (i) Find valid actions, i.e., the actions in <em>A <\/em>that contain a subset of the features appearing in <em>p <\/em>and at least one different value in these features; (ii) For each valid action <em>a<\/em>, compute eff(<em>a, G<sub>p,<\/sub><\/em><sub>0<\/sub>) and eff(<em>a, G<sub>p,<\/sub><\/em><sub>1<\/sub>). The aforementioned process extracts, for each subgroup <em>G<sub>p<\/sub><\/em>, a subset <em>V<sub>p<\/sub> <\/em>of the actions <em>A<\/em>, with each action having exactly the same cost for all individuals of <em>G<sub>p<\/sub><\/em>. 
Therefore, individuals of <em>G<sub>p,<\/sub><\/em><sub>0<\/sub> and <em>G<sub>p,<\/sub><\/em><sub>1<\/sub> are evaluated in terms of subgroup-level actions, with a fixed cost for all individuals of the subgroup, in contrast to methods that rely on aggregating the cost of individual counterfactuals. This approach provides a key advantage to our method in cases where the definition of the exact costs for actions is either difficult or ambiguous: a misguided or even completely erroneous attribution of a cost to an action will equally affect all individuals of the subgroup and only to the extent that the respective fairness definition allows it. In the setting of individual counterfactual cost aggregation, changes in the same sets of features could lead to highly varying action costs for different individuals within the same subgroup.<\/p>\n\n\n\n<p><strong>(c) CSC construction and fairness ranking. <\/strong>FACTS evaluates all presented definitions on all subgroups, producing an unfairness score per definition, per subgroup. In particular, each definition quantifies a different aspect of the difficulty for a protected subgroup to achieve recourse. This quantification directly translates to difficulty scores for the protected subgroups <em>G<sub>p,<\/sub><\/em><sub>0<\/sub> and <em>G<sub>p,<\/sub><\/em><sub>1<\/sub> of each subgroup <em>G<sub>p<\/sub><\/em>, which we compare accordingly (computing the absolute difference between them) to arrive at the unfairness score of each <em>G<sub>p<\/sub> <\/em>based on a specific fairness metric.<\/p>\n\n\n\n<p>The outcome of this process is the generation, for each fairness definition, of a ranked list of CSC representations, in decreasing order of their unfairness score. 
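To illustrate, steps (b) and (c) can be sketched end to end on hypothetical toy data as follows (our simplified code: a fixed toy action set stands in for the fp-growth output, and, in line with the framework, an action has a single cost for the whole subgroup, here simply the number of features it changes):

```python
# Illustrative sketch (ours) of evaluating Equal Cost of Effectiveness for the
# protected subgroups of one candidate subgroup, on hypothetical toy data.

def eff(a, G, h):
    """Effectiveness of action a on group G (proportion achieving recourse)."""
    return sum(h({**x, **a}) == 1 for x in G) / len(G) if G else 0.0

def cost_of_effectiveness(G, actions, h, cost, phi):
    """Minimum action cost reaching aggregate effectiveness phi (inf if unreachable)."""
    costs = [cost(a) for a in actions if eff(a, G, h) >= phi]
    return min(costs, default=float("inf"))

def unfairness(G0, G1, actions, h, cost, phi):
    """Equal Cost of Effectiveness score for a subgroup's protected subgroups."""
    c0 = cost_of_effectiveness(G0, actions, h, cost, phi)
    c1 = cost_of_effectiveness(G1, actions, h, cost, phi)
    return 0.0 if c0 == c1 else abs(c0 - c1)   # inf == inf counts as fair

# Hypothetical toy subgroup, split on the protected feature sex (0: female, 1: male).
# This (biased) toy classifier favors only male executives working overtime.
h = lambda x: 1 if x["sex"] == 1 and x["hours"] == "Overtime" and x["occ"] == "Exec" else -1
G0 = [{"sex": 0, "hours": "FullTime", "occ": "Adm-clerical"}] * 5
G1 = [{"sex": 1, "hours": "FullTime", "occ": "Adm-clerical"}] * 5
actions = [{"hours": "Overtime", "occ": "Exec"}, {"hours": "Overtime"}]
cost = lambda a: len(a)   # simplification: one cost unit per changed feature

assert unfairness(G0, G1, actions, h, cost, phi=0.7) == float("inf")
```

Here only the male subgroup reaches effectiveness 0.7 or higher, so the Equal Cost of Effectiveness score is "inf", mirroring the situation of Figure 2.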
Apart from unfairness ranking, the CSC representations allow users to understand unfairness intuitively, by directly comparing differences in actions between the protected populations within a subgroup.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Experimental Findings<\/h2>\n\n\n\n<p>We now present some indicative findings from our evaluation of FACTS, focusing on the Adult dataset; more information about the datasets, experimental setting, and additional results can be found in the appendix in [9]. The code is available at: <a href=\"https:\/\/github.com\/AutoFairAthenaRC\/FACTS\">https:\/\/github.com\/AutoFairAthenaRC\/FACTS<\/a>. First, we briefly describe the experimental setting and then present and discuss Comparative Subgroup Counterfactuals for subgroups ranked as the most unfair according to some of the presented definitions.<\/p>\n\n\n\n<p><strong>Experimental Setting. <\/strong>The first step was dataset cleanup (e.g., removing missing values and duplicate features, creating bins for continuous features like age). The resulting dataset was split randomly with a 70:30 ratio and used to train and test, respectively, a logistic regression model (subsequently used as the black-box model to audit). For the generation of the subgroups and the set of actions, we used fp-growth with a 1% support threshold on the test set. We also implemented various cost functions, depending on the type of feature, i.e., categorical, ordinal, and numerical.<\/p>\n\n\n\n<p><strong>Unfair subgroups. <\/strong>The following table presents three subgroups that were ranked at position 1 according to three different definitions: <em>Equal Cost of Effectiveness (\u03d5 = 0.7)<\/em>, <em>Equal Choice for Recourse (\u03d5 = 0.7)<\/em>, and <em>Equal Cost of Effectiveness (\u03d5 = 0.3)<\/em>, meaning that these subgroups were detected to have the highest unfairness according to the respective definitions. 
For each subgroup, its <em>rank<\/em>, <em>bias against<\/em>, and <em>unfairness score<\/em> are provided for all definitions listed in the left-most column. When the unfairness score is 0, we display the value \u201cFair\u201d in the rank column. Note that subgroups with exactly the same score w.r.t. a definition receive the same rank. The CSC representations corresponding to the fairness metrics that ranked each of the three subgroups of Table <a href=\"#_bookmark6\">1 <\/a>at the first position are shown in Figure 3.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td rowspan=\"2\"><\/td><td colspan=\"3\"><strong>Subgroup 1<\/strong><\/td><td colspan=\"3\"><strong>Subgroup 2<\/strong><\/td><td colspan=\"3\"><strong>Subgroup 3<\/strong><\/td><\/tr><tr><td>rank<\/td><td>bias against<\/td><td>unfairness score<\/td><td>rank<\/td><td>bias against<\/td><td>unfairness score<\/td><td>rank<\/td><td>bias against<\/td><td>unfairness score<\/td><\/tr><tr><td>Equal Effectiveness<\/td><td>2950<\/td><td>Male<\/td><td>0.11<\/td><td>10063<\/td><td>Female<\/td><td>0.0004<\/td><td>275<\/td><td>Female<\/td><td>0.32<\/td><\/tr><tr><td>Equal Choice for Recourse (<em>\u03d5 <\/em>= 0<em>.<\/em>3)<\/td><td>Fair<\/td><td>&#8211;<\/td><td>0<\/td><td>12<\/td><td>Female<\/td><td>2<\/td><td>Fair<\/td><td>&#8211;<\/td><td>0<\/td><\/tr><tr><td>Equal Choice for Recourse (<em>\u03d5 <\/em>= 0<em>.<\/em>7)<\/td><td>6<\/td><td>Male<\/td><td>1<\/td><td><strong>1<\/strong><\/td><td>Female<\/td><td>6<\/td><td>Fair<\/td><td>&#8211;<\/td><td>0<\/td><\/tr><tr><td>Equal Effectiveness within Budget (<em>c <\/em>= 5)<\/td><td>Fair<\/td><td>&#8211;<\/td><td>0<\/td><td>2806<\/td><td>Female<\/td><td>0.056<\/td><td>70<\/td><td>Female<\/td><td>0.3<\/td><\/tr><tr><td>Equal Effectiveness within Budget (<em>c <\/em>= 10)<\/td><td>2350<\/td><td>Male<\/td><td>0.11<\/td><td>8518<\/td><td>Female<\/td><td>0.0004<\/td><td>226<\/td><td>Female<\/td><td>0.3<\/td><\/tr><tr><td>Equal Effectiveness within Budget (<em>c <\/em>= 18)<\/td><td>2675<\/td><td>Male<\/td><td>0.11<\/td><td>9222<\/td><td>Female<\/td><td>0.0004<\/td><td>272<\/td><td>Female<\/td><td>0.3<\/td><\/tr><tr><td>Equal Cost of Effectiveness (<em>\u03d5 <\/em>= 0<em>.<\/em>3)<\/td><td>Fair<\/td><td>&#8211;<\/td><td>0<\/td><td>Fair<\/td><td>&#8211;<\/td><td>0<\/td><td><strong>1<\/strong><\/td><td>Female<\/td><td>inf<\/td><\/tr><tr><td>Equal Cost of Effectiveness (<em>\u03d5 <\/em>= 0<em>.<\/em>7)<\/td><td><strong>1<\/strong><\/td><td>Male<\/td><td>inf<\/td><td>12<\/td><td>Female<\/td><td>2<\/td><td>Fair<\/td><td>&#8211;<\/td><td>0<\/td><\/tr><tr><td>Fair Effectiveness-Cost Trade-Off<\/td><td>4065<\/td><td>Male<\/td><td>0.11<\/td><td>3579<\/td><td>Female<\/td><td>0.13<\/td><td>306<\/td><td>Female<\/td><td>0.32<\/td><\/tr><tr><td>Equal (Conditional) Mean Recourse<\/td><td>Fair<\/td><td>&#8211;<\/td><td>0<\/td><td>3145<\/td><td>Female<\/td><td>0.35<\/td><td>Fair<\/td><td>&#8211;<\/td><td>0<\/td><\/tr><\/tbody><\/table><figcaption class=\"wp-element-caption\"><em>Table 1: Unfair subgroups identified in the Adult dataset.<\/em><\/figcaption><\/figure>\n\n\n\n<p>Subgroup 1 is ranked first (highly unfair) based on <em>Equal Cost of Effectiveness with \u03d5 <\/em>= 0<em>.<\/em>7, while it is ranked much lower, or even considered fair, according to most of the remaining definitions (see the values of the \u201crank\u201d column for Subgroup 1). 
The same pattern is observed for Subgroup 2 and Subgroup 3: they are ranked first based on <em>Equal Choice for Recourse with \u03d5 <\/em>= 0<em>.<\/em>7 and <em>Equal Cost of Effectiveness with \u03d5 <\/em>= 0<em>.<\/em>3, respectively, but much lower according to the remaining definitions. This finding provides a strong indication of the utility of the different fairness definitions, i.e., that they capture different aspects of the difficulty in achieving recourse.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"936\" height=\"766\" src=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-26.png\" alt=\"\" class=\"wp-image-3790\" style=\"width:840px;height:auto\" srcset=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-26.png 936w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-26-300x246.png 300w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-26-150x123.png 150w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-26-768x629.png 768w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-26-428x350.png 428w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-26-87x71.png 87w\" sizes=\"auto, (max-width: 936px) 100vw, 936px\" \/><figcaption class=\"wp-element-caption\"><em>Figure 3: Comparative Subgroup Counterfactuals for the subgroups<\/em><\/figcaption><\/figure>\n\n\n\n<p>What is more, these different aspects can easily be motivated by real-world auditing needs. For example, out of the aforementioned definitions, <em>Equal Cost of Effectiveness with \u03d5 <\/em>= 0<em>.<\/em>7 would be suitable in a scenario where a horizontal intervention to support a subpopulation needs to be performed, but only a limited number of actions is affordable. 
In this case, the macro viewpoint demonstrated in the CSC of Subgroup 1 (top result in Figure 3) serves exactly this purpose: one can easily derive single, group-level actions that effectively achieve recourse for a desired percentage of the unfavored population.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Recap<\/h2>\n\n\n\n<p>Fairness of recourse is an important, yet still understudied, notion of fairness. In this series of posts, we presented a framework comprising a set of fairness definitions as well as an efficient auditing mechanism that utilizes them to audit fairness of recourse. You may find a detailed description in our published work [9], as well as additional discussion on the topic in related works [5,6,7,10].<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Acknowledgments<\/h2>\n\n\n\n<p>This work has been performed by <em>Loukas Kavouras, Konstantinos Tsopelas, Giorgos Giannopoulos,<\/em> <em>Dimitris Sacharidis, Eleni Psaroudaki, Nikolaos Theologitis, Dimitrios Rontogiannis, Dimitris Fotakis, Ioannis Z. Emiris<\/em> and has been funded by the European Union\u2019s Horizon Europe research and innovation programme under Grant Agreement No. 101070568 (AutoFair).<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td>[1]<\/td><td>European Commission, Directorate-General for Justice and Consumers, Gerards, J., Xenidis, R., Algorithmic discrimination in Europe \u2013 Challenges and opportunities for gender equality and non-discrimination law, Publications Office, 2021, <a href=\"https:\/\/data.europa.eu\/doi\/10.2838\/544956\">https:\/\/data.europa.eu\/doi\/10.2838\/544956<\/a>.<\/td><\/tr><tr><td>[2]<\/td><td>Marc P Hauer, Johannes Kevekordes, and Maryam Amir Haeri. Legal perspective on possible fairness measures \u2013 a legal discussion using the example of hiring decisions. 
Computer Law &amp; Security Review, 42:105583, 2021.<\/td><\/tr><tr><td>[3]<\/td><td>Solon Barocas, Moritz Hardt, and Arvind Narayanan. Fairness and Machine Learning: Limitations and Opportunities. MIT Press, 2023.<\/td><\/tr><tr><td>[4]<\/td><td>Sahil Verma and Julia Rubin. Fairness definitions explained. In Proceedings of the International Workshop on Software Fairness, FairWare \u201918, pages 1\u20137, New York, NY, USA, 2018. Association for Computing Machinery.<\/td><\/tr><tr><td>[5]<\/td><td>Shubham Sharma, Jette Henderson, and Joydeep Ghosh. CERTIFAI: A common framework to provide explanations and analyse the fairness and robustness of black-box models. In AIES, pages 166\u2013172. ACM, 2020.<\/td><\/tr><tr><td>[6]<\/td><td>Alejandro Kuratomi, Evaggelia Pitoura, Panagiotis Papapetrou, Tony Lindgren, and Panayiotis Tsaparas. Measuring the burden of (un)fairness using counterfactuals. In Machine Learning and Principles and Practice of Knowledge Discovery in Databases: International Workshops of ECML PKDD 2022, Grenoble, France, September 19\u201323, 2022, Proceedings, Part I, pages 402\u2013417. Springer, 2023.<\/td><\/tr><tr><td>[7]<\/td><td>Julius von K\u00fcgelgen, Amir-Hossein Karimi, Umang Bhatt, Isabel Valera, Adrian Weller, and Bernhard Sch\u00f6lkopf. On the fairness of causal algorithmic recourse. In AAAI, pages 9584\u20139594. AAAI Press, 2022.<\/td><\/tr><tr><td>[8]<\/td><td>Suresh Venkatasubramanian and Mark Alfano. The philosophical basis of algorithmic recourse. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 284\u2013293, 2020.<\/td><\/tr><tr><td>[9]<\/td><td>Loukas Kavouras, Konstantinos Tsopelas, Giorgos Giannopoulos, Dimitris Sacharidis, Eleni Psaroudaki, Nikolaos Theologitis, Dimitrios Rontogiannis, Dimitris Fotakis, Ioannis Z. Emiris. Fairness Aware Counterfactuals for Subgroups. 
In NeurIPS 2023.<\/td><\/tr><tr><td>[10]<\/td><td>Andrew Bell, Joao Fonseca, Carlo Abrate, Francesco Bonchi, and Julia Stoyanovich. Fairness in Algorithmic Recourse Through the Lens of Substantive Equality of Opportunity. arXiv preprint arXiv:2401.16088, 2024.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Bio Notes<\/h2>\n\n\n\n<div class=\"wp-block-media-text is-stacked-on-mobile\" style=\"grid-template-columns:27% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"224\" height=\"280\" src=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-27.png\" alt=\"\" class=\"wp-image-3791 size-full\" srcset=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-27.png 224w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-27-120x150.png 120w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-27-57x71.png 57w\" sizes=\"auto, (max-width: 224px) 100vw, 224px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p>Dimitris Sacharidis is an assistant professor at the Data Science and Engineering Lab of the Universit\u00e9 Libre de Bruxelles. Prior to that, he was an assistant professor at the Technical University of Vienna, and a Marie Sk\u0142odowska-Curie fellow at the &#8220;Athena&#8221; Research Center and at the Hong Kong University of Science and Technology. He finished his PhD and undergraduate studies in Computer Engineering at the National Technical University of Athens, while in between he obtained an MSc in Computer Science from the University of Southern California. His research interests include data science, data engineering, and responsible AI. 
He has served as a PC member and has been involved in the organization of top related conferences, and has acted as a reviewer and editorial board member for associated journals.<\/p>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-media-text is-stacked-on-mobile\" style=\"grid-template-columns:28% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"220\" height=\"274\" src=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-28.png\" alt=\"\" class=\"wp-image-3792 size-full\" srcset=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-28.png 220w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-28-120x150.png 120w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-28-57x71.png 57w\" sizes=\"auto, (max-width: 220px) 100vw, 220px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p>Giorgos Giannopoulos is a Scientific Associate at Athena Research Center, Greece, and at the National Observatory of Athens. He has performed research in the fields of Information Retrieval, Machine Learning, Data Integration, Geospatial Data Management and Analysis, and the Semantic Web, and has been involved in the specification, design, and development of cataloguing services for European Research Infrastructures. 
His current research interests focus on the adaptation and extension of Machine\/Deep Learning methods in various disciplines, including: fairness and explainability of ML algorithms and pipelines; ML-driven next-day prediction of forest fires; pattern recognition on RES timeseries; explanation and prediction of solar events; ML-driven data integration and annotation; and fact checking.<\/p>\n<\/div><\/div>\n\n\n\n<div class=\"wp-block-media-text is-stacked-on-mobile\" style=\"grid-template-columns:28% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"200\" height=\"302\" src=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-29.png\" alt=\"\" class=\"wp-image-3793 size-full\" srcset=\"https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-29.png 200w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-29-199x300.png 199w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-29-99x150.png 99w, https:\/\/wp.sigmod.org\/wp-content\/uploads\/2024\/09\/image-29-47x71.png 47w\" sizes=\"auto, (max-width: 200px) 100vw, 200px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p>Loukas Kavouras is a Scientific Associate at Athena Research Center, Greece. He has performed research in the fields of Machine Learning, Online and Approximation Algorithms, and Resource Allocation Problems. He finished his PhD and his MSc in Computer Science at the School of Electrical and Computer Engineering of the National Technical University of Athens (NTUA), and his undergraduate studies in Applied Mathematical and Physical Sciences at NTUA. His current research interests include data science, data engineering, and fairness\/explainability of ML algorithms.<\/p>\n<\/div><\/div>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introduction\u00a0 Fairness is a fundamental principle reflecting our innate sense of justice and equity. 
The essence of fairness lies in the equitable, unbiased and just treatment of all individuals.\u00a0 In our previous post (part I), we provided an introduction to the bias of recourse problem. In this post (part II), we describe our framework for [&hellip;]<\/p>\n","protected":false},"author":104,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":true,"template":"","format":"standard","meta":{"footnotes":""},"categories":[131],"tags":[],"coauthors":[177],"class_list":["post-3760","post","type-post","status-publish","format-standard","hentry","category-fairness"],"views":229,"_links":{"self":[{"href":"https:\/\/wp.sigmod.org\/index.php?rest_route=\/wp\/v2\/posts\/3760","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/wp.sigmod.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/wp.sigmod.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/wp.sigmod.org\/index.php?rest_route=\/wp\/v2\/users\/104"}],"replies":[{"embeddable":true,"href":"https:\/\/wp.sigmod.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3760"}],"version-history":[{"count":8,"href":"https:\/\/wp.sigmod.org\/index.php?rest_route=\/wp\/v2\/posts\/3760\/revisions"}],"predecessor-version":[{"id":3799,"href":"https:\/\/wp.sigmod.org\/index.php?rest_route=\/wp\/v2\/posts\/3760\/revisions\/3799"}],"wp:attachment":[{"href":"https:\/\/wp.sigmod.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3760"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/wp.sigmod.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3760"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/wp.sigmod.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3760"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/wp.sigmod.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcoauthors&post=3760"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}