Modern AI

AI Ethics and Responsibility

Understand the ethical considerations, risks, and responsible development of AI systems.

Why AI Ethics Matters

As AI becomes more powerful and pervasive, its societal impact grows. Ethical AI development ensures systems are:

  • Fair and unbiased
  • Transparent and explainable
  • Safe and reliable
  • Privacy-preserving
  • Accountable

Key Concerns

Bias and Fairness

ML models learn from historical data, which may encode human biases. A hiring model trained on past hiring decisions may perpetuate gender or racial discrimination.

Privacy

AI systems often require large amounts of personal data. How is that data collected, stored, used, and protected?
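One widely used privacy mitigation is differential privacy: adding calibrated random noise so that aggregate statistics can be released without revealing any single individual's record. A minimal sketch of the Laplace mechanism follows; the function name, the count, and the epsilon value are illustrative, not part of the example below.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a statistic with Laplace noise scaled to sensitivity/epsilon."""
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Example: release a user count. A count changes by at most 1 if one
# person is added or removed, so its sensitivity is 1.
rng = np.random.default_rng(0)
true_count = 1234
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
print(f"True count: {true_count}, private release: {private_count:.1f}")
```

Smaller epsilon means stronger privacy but noisier releases; choosing it is a policy decision, not just an engineering one.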

Transparency and Explainability

"Black box" models make decisions we can't explain. This is problematic in high-stakes domains (healthcare, criminal justice, lending).
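One mitigation is to prefer inherently interpretable models when the stakes demand it. A linear model's coefficients directly show how each feature pushes the decision. A minimal sketch (the feature names here are purely illustrative, and the data is synthetic):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data where the outcome is driven mostly by the first feature
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.2 * rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Each coefficient is a human-readable explanation of the model's behavior
for name, coef in zip(["income", "debt_ratio", "age"], model.coef_[0]):
    print(f"{name:>12}: {coef:+.2f}")
```

For genuinely opaque models, post-hoc tools (e.g. permutation importance) can approximate this kind of explanation, but a model that is interpretable by construction avoids the problem entirely.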

Safety and Alignment

Alignment work aims to ensure AI systems do what we intend and do not cause unintended harm.

Job Displacement

Automation may displace workers. How do we ensure equitable distribution of AI's benefits?

Example

python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Demonstrating algorithmic bias

# Synthetic hiring data
np.random.seed(42)
n = 1000

# Feature: years of experience (0-9)
experience = np.random.randint(0, 10, n)

# Gender (0=male, 1=female) - biased historical data
gender = np.random.choice([0, 1], n, p=[0.6, 0.4])

# Base rule: hire anyone with more than 5 years of experience
hired = (experience > 5).astype(int)
# Inject historical bias: ~30% of women are rejected regardless of experience
bias_mask = (gender == 1) & (np.random.rand(n) < 0.3)
hired[bias_mask] = 0

# Train model on biased data
X = np.column_stack([experience, gender])
model = LogisticRegression()
model.fit(X, hired)

# Evaluate by gender
for gender_val, gender_name in [(0, 'Male'), (1, 'Female')]:
    mask = gender == gender_val
    X_group = X[mask]
    y_group = hired[mask]
    preds = model.predict(X_group)
    approval_rate = preds.mean()
    print(f"{gender_name} approval rate: {approval_rate:.1%}")

print("\n--- Fairness-aware model ---")
# "Fairness through unawareness": drop gender from the features.
# Caution: this simple fix fails when other features act as proxies for gender.
X_fair = experience.reshape(-1, 1)
model_fair = LogisticRegression()
model_fair.fit(X_fair, hired)

for gender_val, gender_name in [(0, 'Male'), (1, 'Female')]:
    mask = gender == gender_val
    X_group = experience[mask].reshape(-1, 1)
    preds = model_fair.predict(X_group)
    print(f"{gender_name} approval rate: {preds.mean():.1%}")
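Beyond comparing approval rates by eye, a common screening heuristic is the four-fifths (80%) rule: a protected group's selection rate should be at least 80% of the highest group's rate. The sketch below recreates the synthetic data from the example above and audits the historical labels directly; the threshold is a heuristic used in employment contexts, not a complete fairness test.

```python
import numpy as np

# Recreate the biased synthetic hiring data from the example above
np.random.seed(42)
n = 1000
experience = np.random.randint(0, 10, n)
gender = np.random.choice([0, 1], n, p=[0.6, 0.4])
hired = (experience > 5).astype(int)
hired[(gender == 1) & (np.random.rand(n) < 0.3)] = 0

# Selection rate per group, and their ratio (disparate impact ratio)
rate_male = hired[gender == 0].mean()
rate_female = hired[gender == 1].mean()
ratio = rate_female / rate_male

print(f"Male hire rate:   {rate_male:.1%}")
print(f"Female hire rate: {rate_female:.1%}")
print(f"Disparate impact ratio: {ratio:.2f}")
print("Passes the four-fifths rule" if ratio >= 0.8 else "Fails the four-fifths rule")
```

Auditing the training labels before fitting anything is often the cheapest place to catch bias: a model trained on data that fails this check will usually reproduce the gap.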