How can we 𝐢𝐝𝐞𝐧𝐭𝐢𝐟𝐲 𝐮𝐧𝐟𝐚𝐢𝐫 𝐚𝐬𝐬𝐞𝐬𝐬𝐦𝐞𝐧𝐭𝐬 of 𝐀𝐈 𝐦𝐨𝐝𝐞𝐥𝐬 on an individual level and 𝐜𝐨𝐦𝐦𝐮𝐧𝐢𝐜𝐚𝐭𝐞 them to a 𝐡𝐮𝐦𝐚𝐧-𝐢𝐧-𝐭𝐡𝐞-𝐥𝐨𝐨𝐩? And what does this have to do with the 𝐝𝐢𝐞𝐬𝐞𝐥 𝐞𝐦𝐢𝐬𝐬𝐢𝐨𝐧𝐬 𝐬𝐜𝐚𝐧𝐝𝐚𝐥, a.k.a. 𝐬𝐨𝐟𝐭𝐰𝐚𝐫𝐞 𝐝𝐨𝐩𝐢𝐧𝐠?
Building on the brilliant preliminary work of our first author Sebastian Biewer on the topic of 𝘀𝗼𝗳𝘁𝘄𝗮𝗿𝗲 𝗱𝗼𝗽𝗶𝗻𝗴 and on the notion of individual fairness from 𝘍𝘢𝘪𝘳𝘯𝘦𝘴𝘴 𝘵𝘩𝘳𝘰𝘶𝘨𝘩 𝘢𝘸𝘢𝘳𝘦𝘯𝘦𝘴𝘴 (by Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel), the team formally specified a 𝗿𝘂𝗻𝘁𝗶𝗺𝗲 𝗺𝗼𝗻𝗶𝘁𝗼𝗿, resulting in a 𝗴𝗹𝗼𝗯𝗮𝗹, 𝗺𝗼𝗱𝗲𝗹-𝗮𝗴𝗻𝗼𝘀𝘁𝗶𝗰 𝗫𝗔𝗜 𝗺𝗲𝘁𝗵𝗼𝗱, and investigated how such a system can help make 𝗵𝘂𝗺𝗮𝗻 𝗼𝘃𝗲𝗿𝘀𝗶𝗴𝗵𝘁 (in, but not restricted to, the spirit of the EU AI Act) 𝗲𝗳𝗳𝗲𝗰𝘁𝗶𝘃𝗲 𝗮𝗻𝗱 𝗺𝗲𝗮𝗻𝗶𝗻𝗴𝗳𝘂𝗹.
The journal article 𝙎𝙤𝙛𝙩𝙬𝙖𝙧𝙚 𝘿𝙤𝙥𝙞𝙣𝙜 𝘼𝙣𝙖𝙡𝙮𝙨𝙞𝙨 𝙛𝙤𝙧 𝙃𝙪𝙢𝙖𝙣 𝙊𝙫𝙚𝙧𝙨𝙞𝙜𝙝𝙩 has been published in 𝘍𝘰𝘳𝘮𝘢𝘭 𝘔𝘦𝘵𝘩𝘰𝘥𝘴 𝘪𝘯 𝘚𝘺𝘴𝘵𝘦𝘮 𝘋𝘦𝘴𝘪𝘨𝘯!