OUR PROCESS
How It Works
Our Framework
The Seven Governance Dimensions
Every bill is scored across seven dimensions to capture the spectrum of AI governance. Below are the key questions that define each governance dimension. You can learn more about how we developed these questions further down the page.
Accountability & Transparency
Risk identification and mitigation, lifecycle of impact assessments, documentation and transparency, auditing and compliance, precautionary measures, and licensing.
Key Questions
- Does the bill require covered entities (as defined in the bill) to conduct an Impact/Risk Assessment (IA/RA) or similar evaluation?
- Does the bill provide compensation and civil recourse for those affected by harms?
- If the bill includes auditing requirements, does it define how frequently auditing should occur (i.e., at a single point or at regular intervals)?
- Does the bill impose explicit bans on AI systems, such as preventing deployment due to safety risks or requiring compliance before use?
- Does the bill require tools of resilience, e.g., kill switches, recalls, emergency training and protocols, or thresholds at which a deployed system should be shut down?
Data Protection
Privacy rights, data sensitivity, collection and minimization, usage and retention, transfer and sharing, deletion, security, and data subject rights.
Key Questions
- Does the bill establish a private right of action?
- Does the bill have specific requirements for handling sensitive data?
- Does the bill specify guidelines or limitations regarding data collection practices?
- Does the bill require organizations to document the purposes for which personal data is collected, used, processed, or retained?
- Does the bill address how individuals can request deletion of their data?
Bias & Discrimination
Impact and mitigation practices to prevent discriminatory outcomes.
Key Questions
- Does the bill identify specific sectors or domains where bias provisions apply?
- Does the bill restrict the use of AI systems that exhibit potentially discriminatory outcomes?
- Does the bill mandate ongoing monitoring and evaluation of AI systems for bias?
Labor Force
Job displacement, upskilling initiatives, and industry-government collaboration.
Key Questions
- Does the bill contain provisions aimed at expanding the workforce in the AI economy?
- Does the bill call for training the labor force in AI-related skills?
- Does the bill specify partners to collaborate with to research the impact of AI on the labor force?
- Does the bill call for the analysis of challenges faced by workers affected by automation or AI implementation?
Institution
Development of new governance institutions, interagency collaboration, and mechanisms for enforcement.
Key Questions
- Does the bill mandate the establishment of a new entity?
AI & Education
AI literacy, curriculum integration, classroom governance, student protections, surveillance limits, consent, and implementation mechanisms.
Key Questions
- Does the bill amend or revise the Education Code?
- Does the bill suggest resources for teachers to regulate the use of AI in classrooms?
- Does the bill seek the recommendations and input of teachers or educators?
- Does the bill institute a task force to carry out the proposed actions?
Synthetic Content
Definitions of AI-generated media, consent and disclosure requirements, harm and intent standards, contextual distinctions and penalties.
Key Questions
- Does the bill require consent from individuals or entities that may be the subject of the synthetic content?
- Does the bill require that synthetic sexual content be "realistic" to be criminalized?
- Does the bill establish civil remedies for harmed individuals?
- Does the bill establish liability for platforms that distribute misleading or deceptive synthetic content?
Methodology
How Key Questions Were Determined
The key questions were selected to represent the longer set of scoring questions by identifying provisions that most clearly reflect the practical impact and structure of a bill within each dimension.
This process was led by our Head of Policy, Tomo Lazovich, who reviewed each module and determined which questions most meaningfully represent the breadth of that category. The goal was not to reduce complexity, but to ensure that the profile reflects the core regulatory posture of a bill while remaining readable and comparable across jurisdictions.
Because each module contains many detailed questions, we needed a way to surface high-level indicators on the bill profile without overwhelming the reader. The selected questions were those that:
- Capture concrete requirements, such as impact assessments, sensitive data handling, monitoring obligations, and more
- Signal enforceability, such as a private right of action, civil remedies, platform liability, and more
- Indicate structural mechanisms, such as bans, auditing frequency, institutional design, and more
Interpretation
How to Read the Profiles
The Spider Graph
Each axis represents one governance dimension (e.g., Accountability, Data Protection, Bias, AI & Education, Synthetic Content).
- A point farther from the center indicates greater policy coverage, specificity, or enforcement strength in that dimension.
- A point closer to the center indicates limited coverage, vague language, or an absence of enforceable mechanisms.
The overall shape reflects the bill's governance profile: a more circular, evenly distributed shape suggests comprehensive coverage across dimensions, while sharp spikes indicate targeted regulation concentrated in select domains.
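The mapping behind a spider graph is simple to sketch: each dimension gets an equal angular slice, and a bill's score sets the distance from the center along that axis. The sketch below is illustrative only, not AISLE's actual implementation; the 0–1 score scale and the sample profile are assumptions.

```python
import math

# The seven AISLE governance dimensions, in an assumed axis order.
DIMENSIONS = [
    "Accountability & Transparency", "Data Protection", "Bias & Discrimination",
    "Labor Force", "Institution", "AI & Education", "Synthetic Content",
]

def radar_points(scores):
    """Map one score per dimension onto the (x, y) vertices of a spider graph.

    Each axis gets an equal angular slice of the circle; the distance from
    the origin is the score, so greater coverage plots farther from the center.
    """
    n = len(scores)
    points = []
    for i, score in enumerate(scores):
        angle = 2 * math.pi * i / n - math.pi / 2  # start at 12 o'clock
        points.append((score * math.cos(angle), score * math.sin(angle)))
    return points

# A hypothetical bill concentrated in synthetic-content regulation:
# most axes sit near the center, with one sharp spike.
profile = [0.2, 0.3, 0.0, 0.0, 0.1, 0.0, 0.9]
vertices = radar_points(profile)
```

Connecting the seven vertices in order yields the profile's overall shape; a balanced bill would produce a near-circular polygon instead of a single spike.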
Limitations: What Does "Zero" Mean?
A score of zero does not necessarily mean "no governance." It may indicate:
- The issue is addressed under a different legal framework not captured in this dimension.
- The bill references existing statutes rather than introducing new provisions.
- The language is too vague or indirect to meet the scoring threshold.
- The bill intentionally limits scope to a specific domain (e.g., only election deepfakes).
Therefore, a zero reflects absence within this scoring framework, not necessarily absence of regulation in the broader legal ecosystem.
Example
US S 5152
Artificial Intelligence Civil Rights Act of 2024
Roadmap
What's Coming Next
Broadening Our Reach
As AISLE enters its public launch phase, our next focus is expanding its visibility and impact. With the official website live, we are working to ensure the platform reaches policymakers, journalists, researchers, and members of the public who are actively engaging with AI governance. Our goal is not simply to host legislative information, but to provide structured, comparative, and interpretable analysis of AI-related bills across states. In the coming months, we will continue refining the user experience, highlighting high-impact legislation, and strengthening AISLE’s role as a trusted reference point for understanding emerging AI policy trends.
Expanding the Analytical Framework
On the policy front, we are expanding and refining our analytical framework to reflect the rapidly evolving legislative landscape. As states experiment with different approaches to AI governance, from synthetic content regulation to education oversight and transparency mandates, we are continuously assessing whether our scoring dimensions capture the most meaningful areas of regulatory impact. This includes clarifying key questions, refining category definitions, and deepening our analysis of enforceability and institutional design. Our aim is to ensure that AISLE remains both rigorous and adaptable as new policy themes emerge.
NLP & LLM Integration
In parallel, we are advancing research and experimentation in NLP and large language model (LLM) integration. We are exploring how AI-assisted tools, such as automated bill summaries, similarity analysis, and stakeholder-perspective modeling, can responsibly enhance legislative interpretation. These experiments inform both product development and broader research at CNTR, helping us evaluate where AI can meaningfully support governance analysis while maintaining transparency and human oversight. Through this work, AISLE continues to evolve not only as a policy platform, but as a research-driven initiative at the intersection of technology and public accountability.