The Ethical and Societal Considerations of an AI Impact Analysis
Published: May 20, 2025
The use of artificial intelligence is rapidly expanding across businesses and industries, driving innovation, improving efficiency, and unlocking new opportunities. However, as AI systems become more integrated into critical decision-making processes and daily business operations, concerns about their ethical and responsible use also continue to rise. Questions surrounding fairness, transparency, and accountability have become increasingly prominent, highlighting the need for a structured approach to evaluating AI’s broader ethical and societal implications.
This concern has driven the need for regular AI impact analyses. An AI impact analysis helps organizations assess not just how well an AI system performs, but how it affects those involved. In this article, we’ll explore what an AI impact analysis is, the benefits it offers, the key ethical and societal considerations for an AI impact analysis, the typical process involved, and practical steps for getting started.
What is an AI Impact Analysis?
An AI impact analysis is a process through which the effect that an AI system has on individuals, groups, or society is assessed and analyzed. This can be done by individuals themselves or by the organizations involved in the development, sale, or use of the AI systems.
An AI impact analysis is a structured, consistent, and repeatable process that can be shaped by several key organizational factors, including:
- Area of business
- Strategy
- Legal or regulatory requirements
- Risk appetite
- Culture
- Stakeholder expectations
What is Assessed in an AI Impact Analysis?
An AI impact analysis assesses the purpose and intended use of an AI system, in addition to how well the system performs. The analysis can include a description of what the AI system does, what it is designed to do, how it works, its capabilities and architecture, and any technical knowledge or requirements. It should clearly document how users will utilize the AI system and what the benefits will be, as well as potential unintended consequences such as known or potential vulnerabilities, attack types, and common threats or exploitation methods.
The analysis documentation should also include expanded details on the data, algorithms, hardware/software requirements, and alerts or monitoring. The AI impact analysis should evaluate data quality to validate that the training, test, and validation data the AI system uses is not producing incorrect outputs or outputs that negatively impact individuals, groups, or society. Considerations for the data should include its provenance, access to and protection of the dataset, and overall data quality.
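The data checks above can be partially automated. The following is a minimal sketch of pre-review quality and provenance checks, assuming a dataset held as a list of dictionaries; all field names (`source`, `collected_on`, etc.) are hypothetical and would be adapted to your own data inventory:

```python
# Illustrative data-quality checks for an AI impact analysis.
# Dataset shape and field names are hypothetical, not a prescribed schema.

def check_dataset(records, required_fields, provenance_fields=("source", "collected_on")):
    """Flag basic quality and provenance gaps before deeper review."""
    issues = []
    for i, row in enumerate(records):
        # Missing or empty required fields suggest incomplete collection.
        for fld in required_fields:
            if row.get(fld) in (None, ""):
                issues.append(f"record {i}: missing required field '{fld}'")
        # Absent provenance metadata makes the data's origin unverifiable.
        for fld in provenance_fields:
            if fld not in row:
                issues.append(f"record {i}: no provenance metadata '{fld}'")
    return issues

sample = [
    {"age": 42, "outcome": "approved", "source": "crm_export", "collected_on": "2024-11-02"},
    {"age": None, "outcome": "denied", "source": "crm_export", "collected_on": "2024-11-02"},
    {"age": 35, "outcome": "approved"},  # provenance metadata missing
]

for issue in check_dataset(sample, required_fields=("age", "outcome")):
    print(issue)
```

In practice these checks would feed the documented findings rather than replace human review of the dataset's origin and fitness for purpose.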
Beyond the data itself, the AI system impact assessment needs to consider the selected model, as certain models may be susceptible to undesired outcomes such as amplifying bias, overfitting, or learning spurious correlations.
Considerations for the model being used should include:
- Appropriateness of the model, including its origins, previous testing, and overall credibility
- Robustness and resilience
- When and how the model is retrained
- Whether continuous learning is implemented, and its impact(s)
- How the performance of the model will be evaluated
- Privacy concerns (is PII or related data used by or generated from the model?)
- Whether and how the model can be fine-tuned or customized
Lastly, the deployment environment plays an important role in the AI system and should be considered as part of the AI impact assessment, including geographic details, languages, regulatory restrictions, and cultural or behavioral norms. Additionally, the technical aspects or necessary infrastructure of the deployment should be evaluated.
The Benefits of an AI Impact Analysis
A key benefit of an AI impact analysis is that it fosters transparency and trustworthiness in the AI systems in use. Specifically, an AI impact analysis enhances the reliability of an AI system in the following ways:
Early detection of potential threats and related risks
- This gives organizations time to respond to and address anything that is identified early in the AI system lifecycle
Enhanced ethical development and use
- Impact assessments help identify both common-use and potential misuse scenarios, which sharpens the focus on reducing unintended harm to individuals, groups, or society
Informed decision making
- An impact analysis can help provide insight into the functioning of an AI system and can inform leadership on making critical decisions such as resource prioritization, necessary risk/threat mitigation activities, improvement opportunities, or even long-term plans
The Ethical and Societal Considerations of an AI Impact Analysis
To fully assess an AI system’s purpose, functionality, performance, and impact, it is essential to evaluate its broader ethical and societal implications. The key ethical and societal considerations that inform any comprehensive AI impact analysis include the following:
- Transparency: Relates to information about activities, decisions, and AI system properties that is communicated internally or externally to stakeholders. This information must be understandable and accessible, and can include aspects such as features, performance, design details, and data information.
- Fairness/bias: Relates to the ability of the AI system to be impartial and behave in a way that does not perpetuate bias or discrimination, as these may lead to inequity between individuals, groups, or society. The risk is heightened when an AI system makes automated decisions that unfairly discriminate.
- Privacy: Relates to a data subject's right to control the collection and use of their PII. This consideration extends to protecting PII and ensuring that it remains confidential once collected and in use.
- Safety: Relates to the expectation that an AI system should not endanger or introduce new safety concerns for human life, health, well-being, property, or the environment.
- Accountability/explainability: Relates to the ability of individuals or organizations to understand and answer questions about how an AI system came to a certain decision, produced a certain output, or performed a certain action.
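To make the fairness/bias consideration concrete, here is a minimal sketch of one common check: demographic parity difference, the gap in positive-outcome rates between groups. The group labels and decisions are hypothetical; real analyses use domain-appropriate metrics and legally defined protected attributes:

```python
# Demographic parity difference: the gap between the highest and lowest
# positive-outcome rates across groups. Inputs here are illustrative.
from collections import defaultdict

def positive_rate_by_group(decisions):
    """decisions: iterable of (group, approved) pairs -> {group: approval rate}."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(decisions):
    rates = positive_rate_by_group(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical decisions: group "a" approved 2 of 3, group "b" 1 of 3.
outcomes = [("a", True), ("a", True), ("a", False),
            ("b", True), ("b", False), ("b", False)]
print(demographic_parity_difference(outcomes))  # gap of about 0.33
```

A large gap does not by itself prove unfair discrimination, but it flags the system for the closer scrutiny an impact analysis is meant to trigger.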
As we've seen, conducting an AI impact analysis requires careful attention to a range of ethical and societal factors. With these considerations in mind, let's now take a closer look at the process behind performing an effective analysis.
The AI Impact Analysis Process
A comprehensive AI impact analysis requires a thorough assessment of the AI system and documentation of the findings. The key steps involved in the AI system impact analysis process include:
1. Document system information
- Name, identifiers, author/owner, revisions/versions, features, a description, and intended and potential unintended uses
2. Document information related to the data used by the AI system
- Data owner information, data collection process, data cleaning performed, known bias, data protections in place, data characteristics, metadata
3. Document algorithm or model information
- Origin, resiliency to unintended outcomes, previous testing performed, real-world uses, selection criteria, training/testing/validation requirements, performance evaluation, continuous learning impacts, and retraining or adjustment capabilities
4. Document information about deployment
- Deployment location, legal requirements, geographic requirements, cultural considerations, behavioral or social concerns
5. Document any potential benefits or harms
- Based on factors such as accountability, bias, privacy, and safety: what benefits the AI system will provide, and what harm it may inflict if proper controls are not implemented
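The five documentation steps above can be captured as a single structured record so findings stay consistent across assessments. The sketch below uses a Python dataclass; every field name is illustrative, not a prescribed schema:

```python
# A structured record mirroring the five documentation steps of an
# AI impact analysis. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class AIImpactRecord:
    # 1. System information
    name: str
    owner: str
    version: str
    description: str
    intended_uses: list = field(default_factory=list)
    unintended_uses: list = field(default_factory=list)
    # 2. Data information
    data_sources: list = field(default_factory=list)
    known_bias: list = field(default_factory=list)
    # 3. Algorithm/model information
    model_origin: str = ""
    retraining_policy: str = ""
    # 4. Deployment information
    deployment_regions: list = field(default_factory=list)
    legal_requirements: list = field(default_factory=list)
    # 5. Benefits and harms
    benefits: list = field(default_factory=list)
    harms: list = field(default_factory=list)

# Hypothetical example of a partially completed record.
record = AIImpactRecord(
    name="loan-screening-model",
    owner="risk-team",
    version="1.2.0",
    description="Ranks loan applications for manual review.",
    intended_uses=["triage applications"],
    harms=["possible disparate approval rates if bias is unmitigated"],
)
print(record.name, len(record.harms))
```

Keeping the record in one place makes it straightforward to diff assessments between system versions as the AI lifecycle progresses.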
Getting Started with Your AI Impact Analysis
AI systems provide numerous benefits including automation of repetitive or tedious tasks, faster and better data analysis, and enhanced productivity, efficiency, and decision-making. Simultaneously, they have the potential to introduce or exacerbate known challenges such as perpetuation of bias and discrimination, breaches of privacy, damage to the environment, or the spread of misinformation.
By performing an AI system impact analysis and thoroughly evaluating the results, individuals or organizations will be able to identify many of the potential benefits and potential harms that their development, deployment, or use of AI systems may have on individuals, groups, or society at large.
Organizations that fill any AI system role can contact a Schellman expert today to learn more about our Suite of AI Services and discover how we can help ensure your AI systems are trustworthy and secure.
About Charles Goss
Charles Goss is a SOC Senior Associate with Schellman. Prior to joining Schellman in 2023, Charles worked as a Senior Business Consultant at a Big 4 accounting firm, specializing in Technology Risk (SOX 404/ITGC compliance). Charles also led and supported various other projects, including SDLC implementation evaluations, application controls testing, and other internal and external IT audits. Charles additionally has experience with process automation technologies and has developed and deployed various bots/scripts for both internal and client automation projects. Charles has over 4 years of experience serving clients in various industries, including healthcare, industrial products, and consumer goods. Charles is now focused on ISO 27001, 9001, and 22301 certifications, as well as SOC 1 and SOC 2 reporting for organizations across various industries.