Innovative Research in AI Interpretability

At Causal Engine Lab, we explore AI interpretability through mixed-methods research, combining quantitative analysis and qualitative insights to enhance understanding and trust in AI-generated outputs.

Image: a coding interface with Python code that imports scikit-learn, prints a classification report with precision and recall, trains several meta models (DT, RF, LR, XGB), and tunes the XGB model with GridSearchCV.
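The workflow pictured above can be summarised in a short sketch. This is illustrative only: it assumes scikit-learn and xgboost are installed and that X_train, X_test, y_train, y_test already exist; the model list and grid values are placeholders, not our exact configuration.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

# Different meta models trained on the same features (assumes X_train/y_train etc. exist).
meta_models = {
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "XGB": XGBClassifier(eval_metric="logloss"),
}

for name, model in meta_models.items():
    model.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, model.predict(X_test)))  # precision / recall / F1

# Tune the XGB model with a small, illustrative parameter grid.
param_grid = {"max_depth": [3, 5, 7], "learning_rate": [0.05, 0.1], "n_estimators": [100, 300]}
search = GridSearchCV(XGBClassifier(eval_metric="logloss"), param_grid, cv=5, scoring="f1")
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```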
Our Mission
Research Excellence

We use public datasets and custom tasks, together with the GPT-4 API and interpretability frameworks, to evaluate and improve the credibility of AI outputs through user feedback.
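As one concrete illustration (a hedged sketch, not our production code), an output can be generated with the OpenAI Python client and paired with a participant-supplied credibility rating; the prompt, the 1-5 rating scale, and the record_feedback helper are hypothetical.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_answer(question: str) -> str:
    """Query GPT-4 for an answer that study participants will rate."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

def record_feedback(question: str, answer: str, rating: int, log: list) -> None:
    """Hypothetical helper: store a 1-5 credibility rating from a participant."""
    log.append({"question": question, "answer": answer, "credibility": rating})

feedback_log = []
question = "Explain why this loan application was rejected."
answer = generate_answer(question)
record_feedback(question, answer, rating=4, log=feedback_log)
```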

Mixed-Methods Study

Employing quantitative and qualitative analysis for interpretability assessment.
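As a hedged illustration of how the two strands meet, quantitative ratings can be summarised alongside inter-rater agreement on qualitative codes; the ratings and codes below are made-up placeholder values, not study data.

```python
import statistics
from sklearn.metrics import cohen_kappa_score

# Quantitative strand: illustrative 1-5 interpretability ratings from participants.
ratings = [4, 5, 3, 4, 4, 2, 5, 3]
print("mean rating:", statistics.mean(ratings), "stdev:", statistics.stdev(ratings))

# Qualitative strand: two coders label the same explanations with themes,
# and agreement is checked with Cohen's kappa.
coder_a = ["clear", "vague", "clear", "misleading", "clear"]
coder_b = ["clear", "vague", "vague", "misleading", "clear"]
print("inter-rater agreement (kappa):", cohen_kappa_score(coder_a, coder_b))
```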

Image: analytics dashboards showing a line graph alongside a cohort analysis table of weekly user-retention figures.
Comparative Models

Assessing interpretability differences between GPT-4 and GPT-3.5 models.
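To illustrate the kind of comparison involved (a sketch with placeholder numbers, not study results), per-output interpretability ratings collected for the two models can be contrasted with a non-parametric test.

```python
from scipy.stats import mannwhitneyu

# Illustrative placeholder ratings (1-5 interpretability scores per explanation);
# these are not actual study data.
gpt4_ratings = [4, 5, 4, 3, 5, 4]
gpt35_ratings = [3, 3, 4, 2, 4, 3]

# Non-parametric test for a difference between the two rating distributions.
stat, p_value = mannwhitneyu(gpt4_ratings, gpt35_ratings, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3f}")
```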

Image: two researchers at a whiteboard, one sketching a diagram of boxes and arrows in red and blue marker.
Attention Analysis

Investigating how individual attention heads contribute to variation in model performance.
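As a hedged sketch of the kind of analysis involved, per-head attention weights can be extracted from an open model such as GPT-2 with the Hugging Face transformers library; GPT-2 and the entropy measure stand in here for illustration and are not necessarily the model or metric under study.

```python
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The loan was denied because the income was too low.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# outputs.attentions: one tensor per layer, shaped (batch, heads, seq_len, seq_len).
for layer_idx, layer_attn in enumerate(outputs.attentions):
    probs = layer_attn[0]  # (heads, seq_len, seq_len)
    # Entropy per head as a rough measure of how diffuse each head's attention is.
    entropy = -(probs * torch.log(probs + 1e-9)).sum(-1).mean(-1)  # (heads,)
    print(f"layer {layer_idx}: head entropies {entropy.tolist()}")
```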

Contact Us


For inquiries about our mixed-methods study, please reach out through the form provided.