I’ve always been a self-driven learner, but I’m starting to understand that there are limits to what I can learn on my own. As a teacher, I worked hard to establish a “community of learners” in my classroom. Now I find myself at a crossroads where I could really benefit from a learning community of my own. Perhaps by laying out my areas of interest here, a potential mentor or peer group might find me.

My primary area of interest is the application of technology to improve educational outcomes. I’ve taken a multi-disciplinary approach to developing this expertise in both breadth and depth, building on my dual undergraduate degrees in Mathematics and Psychology. My graduate work in Learning and Technology focused on the use of online discussion boards in mathematics and helped prepare me for the emergency remote learning transition of early 2020. Since then, I’ve developed an interest in the applications of machine learning to help induce positive structural changes in education.

Artificial Intelligence (AI) is rapidly advancing, and the field of education stands to be dramatically changed as a result. The question is not if AI will be used in school, but rather when and how. Applications like PhotoMath have already shifted the dialogue in my math classes from “the result” to “the process”. Large Language Models such as ChatGPT will revolutionize the way we think about student writing in the same way. Rather than banning such tools from class, schools should focus their efforts on making sure students are equipped to collaborate responsibly with AI models.

The field of education presents some unique challenges for AI. A haphazard implementation of AI could cause immense harm, so the first order of business is to develop safeguards against misuse. While AI grows in power with massive data sets, schools have competing legal obligations regarding data privacy, transparency, and ethics. Schools will need established engineering practices in place to detect algorithmic biases with regard to legally protected statuses such as race or gender before they can reliably depend upon consumer AI solutions. Equity needs to be built into AI from the ground up.
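
To give a sense of what even a very basic bias audit could look like, here is a minimal sketch in Python. The field names (“gender”, “flagged”) and the toy records are purely hypothetical; the point is only to illustrate a demographic-parity style check on a model’s outputs before trusting it in a school setting.

```python
from collections import defaultdict

def positive_rate_by_group(records, group_key="gender", prediction_key="flagged"):
    """Rate of positive predictions per demographic group (hypothetical field names)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for r in records:
        counts[r[group_key]][0] += int(bool(r[prediction_key]))
        counts[r[group_key]][1] += 1
    return {group: pos / total for group, (pos, total) in counts.items()}

def demographic_parity_gap(records, **kwargs):
    """Largest pairwise difference in positive prediction rates across groups."""
    rates = positive_rate_by_group(records, **kwargs)
    return max(rates.values()) - min(rates.values())

# Toy audit of a hypothetical "at-risk" flagging model before deployment.
sample = [
    {"gender": "F", "flagged": True},
    {"gender": "F", "flagged": False},
    {"gender": "M", "flagged": True},
    {"gender": "M", "flagged": True},
]
print(positive_rate_by_group(sample))  # {'F': 0.5, 'M': 1.0}
print(demographic_parity_gap(sample))  # 0.5
```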

One of the challenges posed by AI-produced content is our inability to distinguish it from human-produced content. Our best tool for detecting AI-produced content is to train another AI to do the job for us. These adversarial agents complement each other when trained in parallel. As the classifier that detects whether an artifact is human-produced gets better, it forces the content-producing agent to behave more like a human. These two AIs are designed to compete with each other, but the feedback loop created when they interact allows them both to learn more efficiently.
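
This is essentially the setup behind generative adversarial networks (GANs). The sketch below is a minimal PyTorch version of that feedback loop on a toy one-dimensional feature: a fixed Gaussian stands in for “human-produced” writing features, and the models, data, and hyperparameters are illustrative assumptions rather than a real detector.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny stand-ins: the generator maps noise to a fake "writing feature",
# the detector scores whether a feature looks human-produced.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
detector = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    human = torch.randn(64, 1) * 0.5 + 2.0  # toy "human" distribution
    noise = torch.randn(64, 8)

    # 1) Train the detector to separate human from generated samples.
    fake = generator(noise).detach()
    d_loss = loss_fn(detector(human), torch.ones(64, 1)) + \
             loss_fn(detector(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the detector into scoring it "human".
    fake = generator(noise)
    g_loss = loss_fn(detector(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```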

Applying AI in education is going to require a whole ecosystem of agents that compete or cooperate to provide checks and balances on the values we wish to instill. The first step is to look at education through the lens of game theory and critically examine the reward structure of the school. The challenge is creating a set of macro-level rules for interactions between AI and human agents that promotes collaborative behavior across the system. We need to shift the discussion from “how do we prevent students from using AI to cheat?” to “how do we align our assessment methods with actual student growth?”.
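
As a toy illustration of why the reward structure matters, the snippet below compares a student’s best response under two grading schemes. The payoff numbers are entirely made up; the only point is that when the assessment rewards the artifact alone, outsourcing to AI dominates, and when it rewards demonstrated growth, collaboration does.

```python
# Hypothetical payoffs for a student's strategies under two assessment schemes.
payoffs = {
    "grade the artifact only": {
        "outsource to AI": 9, "collaborate with AI": 7, "work alone": 6,
    },
    "grade demonstrated growth": {
        "outsource to AI": 2, "collaborate with AI": 9, "work alone": 7,
    },
}

for scheme, options in payoffs.items():
    best = max(options, key=options.get)
    print(f"Under '{scheme}', the student's best response is: {best}")
# Under 'grade the artifact only', the student's best response is: outsource to AI
# Under 'grade demonstrated growth', the student's best response is: collaborate with AI
```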

One of our highest priorities with AI should be the development of an AI learner advocate. Ideally, the system would allow students and parents to articulate their long-term personal learning goals through natural language in an Individualized Education Program (IEP) and the AI would automatically configure itself as needed to monitor and advance those objectives. For example, an algorithm might be programmed to automatically alert teachers if they accidentally forget to include alt text in an assignment with visually impaired students in the class. It’s also important that students maintain a voice in the learning process, and AI may provide a powerful tool for students who have yet to learn how to self-advocate.
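
A check like the alt-text example could be quite simple in isolation. The sketch below is a hypothetical stand-in for whatever a real LMS or IEP system would expose; the field names and data model are my own invention.

```python
def missing_alt_text_alerts(assignment, roster):
    """Return alert messages if images lack alt text and any enrolled
    student has a visual-impairment accommodation on file."""
    needs_alt_text = any(
        "visual_impairment" in student.get("accommodations", [])
        for student in roster
    )
    if not needs_alt_text:
        return []
    return [
        f"Image '{img['name']}' in '{assignment['title']}' has no alt text."
        for img in assignment.get("images", [])
        if not img.get("alt_text")
    ]

# Usage with toy data:
assignment = {
    "title": "Graphing Lines",
    "images": [{"name": "slope-diagram.png", "alt_text": ""}],
}
roster = [{"name": "Student A", "accommodations": ["visual_impairment"]}]
for alert in missing_alt_text_alerts(assignment, roster):
    print(alert)
```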

In the long term, AI could also provide considerable time savings to teachers by producing “just-in-time” lesson plans based on the available data. Writing lesson plans is a task that is often more tedious than it is difficult — assuming it’s not a direct copy of one from last semester. We’re reaching a point where AI can potentially automate the process of aligning experiences with instructional objectives, measuring progress relative to the course calendar, recommending accommodations based on IEPs, and formatting this data in a standardized template for administrators. This would free up more time for the teacher to spend customizing the learning experiences for their specific classes.
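
A first pass at that kind of automation might be nothing more than assembling a standardized template from the data already on hand. Everything in the sketch below (the objective format, calendar, accommodations, and template fields) is a hypothetical stand-in for real curriculum and IEP data.

```python
from datetime import date

def draft_lesson_plan(objective, course_calendar, accommodations, today=None):
    """Assemble a standardized lesson-plan draft from existing course data."""
    today = today or date.today()
    days_remaining = (course_calendar["end_date"] - today).days
    return {
        "date": today.isoformat(),
        "objective": objective["description"],
        "standard": objective["standard_code"],
        "pacing_note": f"{days_remaining} instructional days remain in the term.",
        "accommodations": sorted({a for accs in accommodations.values() for a in accs}),
        "activities": [],  # left blank for the teacher to customize
    }

plan = draft_lesson_plan(
    {"description": "Solve systems of linear equations by graphing",
     "standard_code": "A-REI.6"},  # hypothetical standard reference
    {"end_date": date(2024, 6, 14)},
    {"Student A": ["extended time"], "Student B": ["preferential seating"]},
)
print(plan)
```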

From a mathematical perspective, algorithmic lesson generation bears some key structural similarities to the “Multi-Armed Bandit Problem” in probability theory. Think of the teacher as a gambler in a casino and the slot machines as the various potential lessons that could be assigned to a student. The teacher wants to select the experience with the highest odds of paying out for that student, but not every student will respond to a particular experience in the same way. For a teacher to maximize the learning potential for all students, they must strike a balance between exploiting lessons that have worked in the past and exploring new methods they have yet to try. The more information a teacher has about the student, the more reliably they’re able to predict what might work.
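
One standard way to manage that trade-off is an epsilon-greedy strategy: exploit the best-known lesson most of the time, explore a random one a small fraction of the time. The sketch below simulates this with invented lesson names and a made-up “hidden effectiveness” for each lesson standing in for whatever learning measure would actually be observed.

```python
import random

lessons = ["video walkthrough", "peer worksheet", "interactive applet"]
counts = {name: 0 for name in lessons}
mean_reward = {name: 0.0 for name in lessons}
epsilon = 0.1  # fraction of the time we explore a random lesson

def choose_lesson():
    if random.random() < epsilon:
        return random.choice(lessons)            # explore
    return max(lessons, key=mean_reward.get)     # exploit the best so far

def record_outcome(lesson, reward):
    counts[lesson] += 1
    # Incremental update of the running mean reward for this lesson.
    mean_reward[lesson] += (reward - mean_reward[lesson]) / counts[lesson]

# Simulated classroom: each lesson has a hidden "true" effectiveness.
true_effect = {"video walkthrough": 0.4, "peer worksheet": 0.6, "interactive applet": 0.7}
for _ in range(500):
    lesson = choose_lesson()
    reward = 1.0 if random.random() < true_effect[lesson] else 0.0
    record_outcome(lesson, reward)

print(mean_reward)  # estimates should roughly track the hidden effectiveness
```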

The fact that confidential health information about a student is often directly linked to academic accommodations presents a unique privacy problem for AI. In much the same way that Security needs to be applied across Development and Operations, so too must Accessibility be applied across educational organizations. A substitute teacher doesn’t need to know every student’s complete medical history, but a classroom AI might be able to provide them with the information that “students need a break” without revealing that Johnny has ADD. There is a delicate balance to be struck between controlling access to sensitive data on a need-to-know basis and using that data to provide teachers with actionable insights.
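
One way to picture that need-to-know layer is as a function that maps confidential accommodation records to de-identified, actionable guidance. The sketch below is purely illustrative; the categories, suggestions, and record format are all assumptions.

```python
# Hypothetical mapping from accommodation categories to classroom guidance.
SUGGESTIONS = {
    "attention": "Build in a short movement break every 20 minutes.",
    "visual_impairment": "Read slide text aloud and describe any diagrams.",
    "anxiety": "Avoid cold-calling; use think-pair-share instead.",
}

def classroom_guidance(confidential_records):
    """Return de-identified guidance derived from student accommodations.

    `confidential_records` maps student IDs to accommodation categories;
    only the categories (never the IDs or diagnoses) leave this function.
    """
    categories = {cat for cats in confidential_records.values() for cat in cats}
    return sorted(SUGGESTIONS[cat] for cat in categories if cat in SUGGESTIONS)

# Usage: the substitute sees the guidance, not who triggered it or why.
records = {"student_17": ["attention"], "student_04": ["visual_impairment"]}
for tip in classroom_guidance(records):
    print(tip)
```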

As a former teacher, I find the idea of a computer watching and listening to everything I do in the classroom terrifying. We cannot permit AI to become an electronic police force in schools. At the same time, I also know that the amount of data produced in a live class is more than any single person could analyze on their own. The subtle timing of a student’s confused facial expression can often speak volumes about that student’s understanding. I would not be the least bit surprised if an AI could outperform me on this task, but I also know there will always be an unavoidable risk that the AI might be wrong and cause irreparable harm. The tech industry mantra of “move fast and break things” is an inappropriate philosophy for schools.

Instead, education must adopt AI slowly and with confidence. Educational data must be treated with the same level of careful stewardship as healthcare data. Researchers need to gradually integrate AI through a series of clinical trials and monitor the system as a whole for changes. Much like a teacher, AI needs to build a relationship of trust with all of the stakeholders in the environment. Students and parents need to know the AI is working for them and not against them. Teachers and administrators need to have the power to overrule AI assessments where appropriate.

I realize that these scenarios might seem like widely disconnected fields, but I’m finding that they share some common mathematical threads. I accepted a long time ago that I could spend a lifetime working on Millennium Problems I’ll probably never solve, so instead I try to focus on making progress in small steps. I’m working to further my knowledge of Category Theory because I’m finding the language of maps and objects to be extremely valuable in modeling complex systems. I feel there’s a deep connection between Complex Analysis, Linear Algebra, and Topology that I’m just shy of understanding. I’m also starting to realize a need for better Statistical tools, particularly around the properties of the different Probability Distributions used for modeling risk.

I’m very much open to graduate-level research opportunities in Mathematics or Computer Science that would advance my understanding of the topics described above. If you’re in a department working towards these high-level goals and have an opening for a research assistant, please feel free to reach out to me through any of my social media profiles.
