Some Challenges with Generative AI
Introduction
Generative AI tools (e.g., ChatGPT, DALL·E, Sora) offer remarkable capabilities.
However, these tools introduce complex challenges that affect individuals, organizations and society.
Top Challenges with Generative AI
- Misinformation and Disinformation
- What’s the issue? Generative AI can convincingly fabricate text, images, audio, or video, making it easy to spread false or misleading content; moreover, models cannot update their knowledge in real time or generate genuinely new ideas, so outputs may repeat outdated claims.
- Risk: Deepfakes, fake news articles, or bogus research can be weaponized in politics, finance or social movements.
- Example: AI-generated video of a political figure making false statements.
- Bias and Discrimination
- Theme: quality control and data accuracy
- What’s the issue? AI systems are trained on vast datasets that reflect historical and social biases, and output quality is tightly bound to the calibre of the training data: output can only be as good as the data it was trained on ("bias in, bias out").
- Risk: AI may unintentionally produce outputs that are racist, sexist, or otherwise culturally insensitive.
- Example: An AI image generator producing stereotypical depictions when prompted with certain job roles or ethnicities.
- Privacy and Data Leakage
- What’s the issue? Generative models may inadvertently “leak” personal or sensitive data from training sets.
- Risk: Individuals’ private information could be reproduced or misused.
- Example: AI chatbot outputs a real person's address or medical detail from training data.
- Intellectual Property (IP) and Copyright
- What’s the issue? AI can generate content very similar to copyrighted works, raising questions about ownership and originality.
- Risk: Legal disputes over whether outputs infringe on creators' rights.
- Example: AI generates art mimicking a living artist's style without consent.
- Job Displacement
- What’s the issue? Automation of creative, analytical and support tasks can reduce demand for certain human roles.
- Risk: Economies may see disruption in media, design, education and customer service industries.
- Example: AI writing tools replacing entry-level copywriters or paralegals.
- Hallucinations (False Outputs)
- What’s the issue? Generative AI sometimes confidently generates incorrect or fabricated information.
- Risk: Users may trust AI answers that are incorrect or misleading.
- Example: AI invents quotes or citations in a research paper.
- Lack of Transparency (Black Box Problem)
- What’s the issue? It's often unclear how a model makes decisions or why it produces certain outputs.
- Risk: Difficult to audit, explain or regulate decisions—especially in high-stakes areas like healthcare or justice.
- Example: A medical diagnosis tool recommends treatment with no clear explanation.
- Ethical Use and Governance (legal)
- Theme: AI decision-making is opaque and hard to explain; this hinders accountability and trust, and can lead to unjust outcomes.
- What’s the issue? There is no universal framework to ensure AI is developed and used responsibly, and AI itself lacks the capacity to model the consequences or ethical implications of its decisions.
- Risk: Companies or governments may use AI unethically (e.g., surveillance, propaganda).
- Example: AI used to profile or monitor populations without consent.
- Access and Inequality
- What’s the issue? Powerful generative tools may only be accessible to wealthy organizations or countries.
- Risk: Widening digital divide and imbalance of creative/economic power.
- Example: Small businesses unable to compete with AI-enhanced competitors.
- Dependency and De-skilling
- What’s the issue? Overreliance on AI could erode critical thinking and human creativity.
- Risk: Users may stop verifying content or developing their own judgment.
- Example: Students using AI to complete assignments without understanding the material.
- Resource Usage
- What’s the issue? Generative AI consumes large amounts of resources, such as electricity and water, to train and operate.
- Risk: Mis-allocation of finite resources and negative environmental impacts.
- Example: ChatGPT reportedly cost in excess of US$100 million to train, and its daily energy requirement is estimated to cost the equivalent of powering 33,000 households.
- Brain-computer interfaces
- What’s the issue? There is a need to establish ethical, global governance frameworks and to prioritise cognitive liberty and user control over brain data.
- Risk: Developing brain foundation models such as Chiral raises significant ethical concerns about mental privacy and the potential for misuse by corporations or governments.
- Example: Surveillance capitalism or AI-driven manipulation that threatens human identity, individual autonomy and mental freedom.

(source: https://cms.zerohedge.com/s3/files/inline-images/2023-11-21_09-51-02.jpg)
Summary Table (problems with generative AI)
| Challenges | Examples | Risks |
|---|---|---|
| Misinformation | AI-generated fake news | Public manipulation |
| Bias | Gender stereotypes in job prompts | Discrimination |
| Privacy | Leaking training data | Identity theft, legal exposure |
| Copyright/IP | Copying artistic styles | Legal claims |
| Job displacement | AI replacing junior analysts | Economic disruption |
| Hallucinations | Inventing facts confidently | Misinformation, liability |
| Transparency | Unexplainable AI decisions | Lack of accountability |
| Ethics | AI used for surveillance | Rights abuse |
| Access inequality | AI benefits large firms only | Global disparity |
| De-skilling | Students over-relying on AI | Decline in skills |
| Resource usage | Electricity and water consumption | Misallocation of finite resources |
| Brain-computer interfaces | Brain foundation models without consent safeguards | Loss of mental privacy and autonomy |