Trend: Getting Ahead of AI Bias With Inclusive Algorithms
08/12/22

By acknowledging the inherent pitfalls of AI systems and building in bias controls and data-integrity measures, we can create an ethically positive future rather than an ethically ambiguous one.

The societal impact of machine learning algorithms and artificial intelligence systems is multifaceted. The use of big data and algorithms across fields including insurance, advertising, and education can lead to decisions that harm the poor, reinforce racism, and amplify inequality. Models that rely on false proxies and bad datasets are scalable, amplifying any inherent biases across ever-larger populations. At the same time, these systems can also provide groundbreaking solutions and deliver societally positive efficiencies.
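The proxy problem can be sketched in a few lines. Suppose a credit model learned historical approval rates keyed to zip code (an illustrative stand-in for a proxy feature; the data below is invented): every applicant it scores then inherits the neighborhood-level bias of the past, at whatever scale the model is deployed.

```python
# Minimal sketch (illustrative data only): a "model" that memorized
# historical approval rates per zip code reproduces past bias at scale.

# Hypothetical approval rates learned from biased historical data.
historical_approval_rate = {"10001": 0.80, "10456": 0.35}

def score(applicant: dict) -> bool:
    """Approve if the applicant's zip code historically cleared 50%."""
    return historical_approval_rate.get(applicant["zip"], 0.0) >= 0.5

# Two applicants identical except for neighborhood:
a = {"zip": "10001", "income": 55_000}
b = {"zip": "10456", "income": 55_000}

print(score(a), score(b))  # the proxy alone flips the decision
```

The point of the sketch: nothing in the decision rule mentions a protected attribute, yet the zip-code proxy carries it anyway, and the rule applies uniformly to everyone scored.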

AI challenge from BMW wants to beat internal bias with data
BMW Group is launching the “Joyful Diversity with AI” challenge, inviting participants to propose data-driven AI solutions that help the automaker support diversity, equity, and inclusion in its work environment and communications. The deadline for submissions is October 3, 2022, and winners will be announced this December. BMW

EU to regulate AI’s impact on life-altering decisions
To curb machine-based discrimination, the EU plans to introduce a comprehensive, global template for regulating the AI models used to support “high-risk” decisions, such as filtering job, school, or welfare applications, or assessing the creditworthiness of loan applicants for banks. All of these are potentially life-altering decisions that affect whether someone can afford a home, secure a student loan, or even get a job. The Guardian

FairPlay is the first ‘fairness-as-a-service’ solution to algorithmic bias
Designed primarily for financial institutions, FairPlay’s solution aims to keep the bias of the past from being coded into the algorithms deciding the future, and uses next-gen tools to assess automated decisioning models and increase both fairness and profits for financial institutions. FairPlay
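Fairness assessments of decisioning models often start from simple disparity metrics. As a generic illustration (this is the standard “four-fifths rule” from US hiring guidance, not FairPlay's actual methodology), one can compare each group's approval rate to the most-favored group's rate and flag large gaps:

```python
# Illustrative fairness check: the "four-fifths rule" compares each
# group's approval rate to the most-favored group's rate.
# (A generic sketch; not FairPlay's proprietary method.)

def adverse_impact_ratios(approvals: dict) -> dict:
    """approvals maps group -> (approved, total); returns each group's
    approval rate divided by the highest group's rate."""
    rates = {g: approved / total for g, (approved, total) in approvals.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

ratios = adverse_impact_ratios({"group_a": (80, 100), "group_b": (48, 100)})
flagged = {g for g, r in ratios.items() if r < 0.8}  # four-fifths threshold
print(ratios, flagged)  # group_b's ratio is 0.6, so it gets flagged
```

A ratio below 0.8 is the conventional red flag for adverse impact; production tools layer far more sophisticated statistical tests on top of checks like this.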

Even artificial intelligence has a Heinz bias when asked for ketchup
Popular AI image generator DALL-E 2 shows a clear brand preference when given generic prompts to create ketchup imagery, demonstrating how even seemingly objective models can reinforce pre-existing biases. Heinz


Interested in getting trends and insights like this in your inbox? Subscribe to our free weekly newsletter.