AI in Education: Future and Equity
Key Points
- The panel highlights that AI’s role in education varies widely by socioeconomic background, with many students receiving little to no AI‑assisted learning.
- Current AI applications focus on personalized curricula, teacher‑level content curation, and back‑office operational support within schools.
- Experts stress the need to overhaul how AI itself is taught in K‑12 settings to prepare students for an increasingly AI‑driven world.
- Parents in diverse regions (Nairobi and the Bay Area) report that AI tools are slowly entering classrooms, but adoption remains uneven and often intentional.
**Source:** [https://www.youtube.com/watch?v=JSnwZmbTMXM](https://www.youtube.com/watch?v=JSnwZmbTMXM)
**Duration:** 00:36:25

## Sections

- [00:00:00](https://www.youtube.com/watch?v=JSnwZmbTMXM&t=0s) **Socioeconomic Divide in AI Education** - The panel debates how AI assistants will influence 12-year-old learners by 2028, highlighting that adoption will vary dramatically with socioeconomic status and locale.

## Full Transcript
it's 2028 3 years from now and you are
12 years old how much of your learning
is done with an AI assistant a lot a
little or none at all Phaedra Boinodiris is
the responsible AI leader for Consulting
Phaedra welcome to the show for the first
time what do you think I think the
answer is it depends in particular on
one's socioeconomic background yeah we
will definitely be talking about that
Skyler Speakman senior research
scientist welcome back to the show uh
Skyler what do you think yes but a
little and that is intentional for my
three kids growing up in Nairobi Kenya
awesome and last but not least is Marina
Danilevsky senior research scientist uh and
I believe we're talking about your kid
here who will be 12 in 2028 what do you
think I my son will be and I think it'll
be a little also intentional even though
and maybe especially because I live in
the Bay Area all right awesome all that
and more on today's mixture of
experts I'm Tim Hwang and welcome to
Mixture of Experts each week we bring you
the analysis hot takes and banter that you
need to keep up with the ever hectic
world of artificial intelligence today
we're going to focus the entire episode
as we get to the end of the year on AI
and education what that means for AI to
be used in education the risks and
opportunities and where we think it'll
go into the future so let's just dive
into it and Phaedra I want to turn to you
first to just kind of set the stage for
our listeners um I think AI and
education is kind of widely hyped and
it's sometimes difficult to know what it
is that's going on and so I want to give
our listeners just a lay of the land to
start what are the big uses for AI in
education right now and where do you
expect it'll go in the next few years
well um I think it's really important to
be thinking if you're thinking about
using artificial intelligence in an
education context first of all to be
focused specifically on personalized
learning and having customized
curriculum but then also utilizing it to
help teachers curate that curriculum to
be able to augment their dayto day
additionally there's all kinds of things
that happened in the the background back
office to help with operations but I
think in addition to having a
conversation about what is the
usefulness of artificial intelligence in
an education context I think it's also
important to have a conversation about
how do we need to be changing our
approach to how we're even teaching the
subject of AI in schools today and how
that needs to be changing going forward
yeah that's great I mean I guess your
short answer is it's kind of happening
everywhere front of the you know
the educational experience the school
operations teaching people about AI um I
guess Marina Skyler I don't know if either of
you want to jump in as both parents
yourselves I'm kind of curious about
what you're seeing on the ground with
your kids I mean are you seeing you know
teachers starting to use AI tools or
people being encouraged to learn about
AI kind of curious about how that's all
kind of playing out in your experience
so I am seeing it come out a bit more
with our students and sorry with our
kids and their teachers um I think one
of my hot takes on this is at least
for primary education this is an
opportunity for us to channel all of the
people hours that ideally will be made
available with the advent of
AI so I think it would be really great
to really keep the personal touch in the
primary education space because of all
of the other enablement we've had in
other sectors so I think really keeping
this balance between the role of AI and
that human connection in the
classroom is uh so important for what at
least we are looking forward to for our
kids yeah for sure Marina what are you
seeing I'm kind of curious I mean you
know Skylar in Nairobi you're in
California very different places uh but
are you seeing the kind of AI sort of
wave appear with your kids' education
yeah absolutely uh I think that there's
a lot to be said for some very
interesting uh games and gamification
that is going on in the educational
space so one thing I'll call out without
trying to be a sponsor is Osmo uh really
great games that would not have been
available even a few years ago because
of the capabilities of like the uh
tablet camera to be able to see and
directly interact so it's this really
lovely mix of what's going on on the
screen but also being able to do things
that are physical for for spelling for
math for coding uh those kind of things
are really great same thing with
programmable robots Botley and things of
that nature that's the kind of thing
that that's showing up as well so I
think that it's very interesting it
gives a lot more options for how kids
could be exposed to these concepts and
that seems to be a good thing since kids
learn differently yeah for sure I think
it's one of the things I'm most excited
about is like you got all these options
for learning the same topic now which
feels which feels really interesting uh
Phaedra let me ask you this you know
we're looking ahead to the next year you
know 2025 I think we're going to hear a
lot more about AI and education you know
what are the big trends do you have like the
one thing where you're like wow this is
really the thing that's going to kind of
knock people's socks off in the next 12
months um kind of curious about what our
listeners should be paying attention to
oh well I was interested this week to
see an article come out of PBS and
their use of artificial intelligence in
particular uh enabling children to have
conversations with some of their
favorite characters in the PBS learning
shows um they are not using generative
AI but more traditional forms of AI um
and it's specifically targeted towards
younger kids who naturally talk to the
TV
uh so I I thought that that was really
interesting I think we're we're going to
see uh more very clever ways as Marina
said about the intersection of play
and
AI um and I'm really looking forward
to seeing how that shapes up in the
realm of education and in particular
ways of being able to harness that to
address more Equitable outcomes in
education yeah for sure and I do want to
talk a little bit about some of the
concerns here I think Phaedra do you want
to go into that point just a little bit
more I know you mentioned it at
the top of the episode as well as kind
of these concerns ultimately about you
know the equity of these kinds of tools
it's really important that we're
teaching people how to be critical
consumers of technology at large it's
really really important and in
particular how to teach people what
is the real nature of AI and the real
nature of data one of my favorite
definitions of the word data is that it's
an artifact of the human experience
right we we humans we generate the data
or we make the machines that generate
the data but it's important to recognize
we humans we have over 180 biases and
counting so what's really interesting
about AI is that it acts as a mirror
that reflects our biases back towards us
but we have to be brave enough and
introspective enough you know to look
into the mirror and decide does this
reflection actually align with my values
my organization's values if it does it's
important to be transparent why did you
pick this data why did you pick this
approach and if it doesn't align that's
when you know you need to to change your
approach and it goes back to the
conversation I had at the top of
the hour about not just having a
conversation on how AI can be used to
transform education but how we really
need to be teaching this in schools
because if you're lucky enough to be
able to take a class on the subject
of AI or AI ethics or data ethics you're
probably in a higher education institution and you
have self-categorized as a coder or a
machine learning scientist or a data
scientist and literally not everybody
else on the planet um so I think we need
to be thinking about how do we bring
this kind of a curriculum that's
holistic and multidisciplinary
uh much earlier in people's academic
careers in fact you know I see no reason
why we should be teaching this in middle
schools and in particular in social
studies class versus computer science
class which I think is where this
subject ultimately belongs that's great
and we want to I want to get some more
kind of concerns out on the table I
think I do want to talk a little bit
about how we approach these sorts of
issues and how do we address them I know
both Marina you and Skyler said well
hopefully very little and that's kind
of by choice in terms of you know kids
using you know AI to learn and
specifically kind of AI assistants was
kind of how I had teed it up I guess
Marina maybe I'll choose you
first and then we'll go to Skyler so why
is that I mean what are what are your
concerns there why would you want to
limit sort of like access to these tools
as a as a way of learning well I think
because uh the nature of the tools right
now if we're actually talking about
generative AI and not sort of the more
traditional ML AI is that it wants to
adjust itself almost a little too much
to the person and it's a good way to
fall down rabbit holes that kids are
maybe not yet very well equipped to
handle there needs to be some structure
around that so on the one hand it's good
to have the adjusting personalization on
the other hand it can be dangerous so I
hope that there's going to be a decent
amount of oversight for that kind of
thing and a way also of doing critical
thinking so I think that from a very
early age what kids can be taught again
in a gamification way is how do you
trick it how do you break it how do you
make it lie to you versus tell you
the truth and then you start to
really understand even as a kid how to
set the expectations that it's not an
oracle it's more potentially like Loki
and like the trickster and and see that
that's maybe the kind of back and
forth mildly antagonistic relationship
you might want to have with it it also
it'll help with critical thinking yeah I
love that I like part of the education
here is like getting kids to break these
technologies it feels like uh very very
rich and something we should talk more
about um Skyler do you want to jump in I'm
not sure if you share Marina's concerns
or if you kind of think about your
worries about this technology in a
different direction first of all plus
one on the gamification I think that
really is such an important catch uh for
these kids going into this technology
space uh but I I do want to give this
example that I saw earlier on Facebook
this week I think of just a really great
balance between generative AI technology
and classroom leadership one of my
friends from undergrad is a is a primary
education teacher in the US and he had
this really cool post on Facebook where
he generated some prompts that he used
in his class that wrote Act One of a
play and and then his students were
going to act it out and the students
then had to write act two of the play
and so having that type of dynamic
leadership in front of the classroom
with content coming from both generative
AI and from the kids themselves
playing back and forth on that I thought
that was just a really cool example of
balancing the roles that come from
generative AI social interactions and
Leadership from the classroom so yeah
shout out to Don Donnie piery on that um
and Skyler I guess to kind of close this
kind of section before we talk a little
bit about you know ways of addressing
these types of concerns I'm curious if
there's any other items you might want
to throw on the table I know you know
Phaedra talked a little bit about the
equity concerns here we just talked a
little bit about kind of like dependence
and personalization as things that we
might worry about I'm curious if there's
other things that you might want to kind
of put on the table that come to mind
you know as we think about how to
responsibly sort of deploy this type of
tech yeah I think that those are some
really great examples on the
responsible and sorry the equity
angle in particular um my kids do go to
a private school here in Nairobi Kenya
and that looks quite different from
uh the global majority around the
world um you know uh north south east
west and so I think making sure that
that is recognized and top of mind for
how these things are are deployed um
across schools of all sorts of
socioeconomic backgrounds uh is a
key point that Phaedra just started off the
conversation with I do a lot of
volunteer work with with the Girl Scouts
and we have used games to introduce the
girls to things like algorithmic bias
but then it's I think it's important to
have conversations with them like you
know give me examples of where AI has
delighted you now give me examples of
where you were playing around with an
AI and the output made you feel really
bad like you knew it was wrong or you it
didn't make you feel good and listen to
what they say it is I think really
telling when you invite a young
person to be a critical consumer of the
tech and really be thinking about things
like disparate impact or unfair outcomes
it's it's very very telling and again it
goes back to what I was saying at at the
at the onset of this conversation which
is about this is far more about social
studies like whose worldview is actually
being depicted in this AI model beyond
just you know can I trust the outputs
whose worldview in this model
is being reflected also teaching them to
ask critical questions like who's
accountable for this model how much
better does it perform compared to a
human
being uh it's just I I think these are
all important things we need to be
teaching the next
generation I think that's a great lead
into the next segment which is you know
thinking a little bit about how do we
kind of address some of this right these
kind of concerns in the technology
and I know Phaedra you've been thinking a
lot about these issues done a lot of
work on it in the last few years uh in
particular I know you know right before
this episode I was reading a little bit
more about your work with Smarter
Balanced um and I'm kind of curious if
you want to talk a little bit about your
work there and kind of how it applies to
some of the issues we've been talking
about yes um this Ed tech company out of
the state of California they were
interested in addressing inequity in
traditional educational assessments
there's been a lot of research that
shows that traditional educational assessments
are inequitable for a wide variety of
reasons including you
know English might not be your primary
language or you might suffer from test
anxiety or you might be neurodivergent
there's countless reasons why
traditional tests might not work right
so they wanted to dive into uh
discussing or experimenting on whether
artificial intelligence could directly
address some of this
inequity and so uh one of the things
they tasked us to do was to form a think
tank that included students from all
over the world and teachers in
elementary middle school and high school
as well as people who had leadership in
neurodivergent communities etc we
pulled together this think tank and
really dove into some very specific use
cases for these AI models like if you
were going to use an AI to to be able to
ascertain the skill set of let's say a
sixth grader's ability to comprehend a
passage of text and have conversations
deeper conversations about that passage
of text right what would the unintended
effects of such a model be
and then given those
potential categories of harm and the
principles that this Think Tank came up
with how would you detail what are the
functional as well as the nonfunctional
requirements needed to be seen in
such a model and the principles that
the think tank came up with I think were
really interesting like IBM for example
you know we detail fairness and
explainability and robustness against
adversaries and transparency and data
privacy right this think tank when
thinking about how these AI models are
going to be used by children included
principles like kindness and data
sovereignty and
agency and so a lot of the work was
thinking through what does it mean for
an AI model to reflect a principle a
human value like kindness what does that
look like in terms of feature and
function it was absolutely fascinating
work and that report is being made
public yeah Phaedra I think that's great I
think one of the things I'm really
excited to see is all of these groups
starting to articulate a lot more
crisply like what are the values that
they want out of these Technologies and
I think that's such important work
because it helps to kind of like really
set up the the goals right like what do
we need to do in order to make sure that
like these systems are um doing what we
want well part of it is we need to know
what we want in the first place um
Skyler I know I wanted to give you a
chance to give yourself a little bit of
a kind of like travel report I know you
were at the AI safety institutes
conference um which as I understand is
very much involved in kind of the
process of trying to develop evaluations
and standards for the space um did any
of these topics come up I'm kind of
curious about how that might plug into
what we're talking about here uh yeah I
think it came up in two ways one was
directly with education as a use case
and the second one was a bit more
indirectly which is what are these kind
of international AI safety institutes
doing for capacity building and
awareness so we've already hit on these
two topics about how
important it is for these young
consumers to be critical about that
technology that's the capacity building
and awareness side and a bit more kind
of on the policy side this kind of
uh technical gathering for these AI
safety institutes was really trying to
spell out how we do risk assessment
everything from the you know the doomers
you know end-of-the-world type uh
scenarios and addressing the day-to-day
harms that we already see in these
deployed models uh so it was really a
fascinating couple of days between
technology experts uh
academics and policymakers trying to
come together and put language down um so
that in Paris a few months from now
in February uh you know these uh these
countries can come together and sign
these multilateral agreements about
where they want to prioritize AI safety
again from education from Health Care
from Market competition really really
cool space uh to uh to be a part of uh
and that all just concluded uh last week
in in San Francisco um I was there
representing the uh the Kenya delegation
uh quite quite an interesting event yeah
that's really exciting and yeah I think
part of it is you know especially in the
US right education is kind of regulated
on a regional level it's exciting to
hear that kind of like at the
international level we're trying to
develop these kind of global standards
you used two key terms there
regulating and standards and the
Secretary of Commerce presented at this
conference and she was incredibly clear
the AI safety institutes are not
regulators they are there to catalyze
and provide standards so it was a really
really cool conversation to have in
there so um both of those areas have a role to
play but these AI safety
institutes are much more about
catalyzing and forming standards and not
yet on the regulator side um so Marina
maybe I'll present to you maybe a little
bit of a a hard question that I've been
kind of mulling over um you know I think
as we've talked about right I think
there's like huge opportunity with
technology there's certainly risk
um but I think there's a lot of work
being done to try to mitigate them um
but I'm sure some of our listeners will
be kind of listening to this episode and
saying well there's maybe one thing
which we haven't talked about which is
can someone just like refuse in the
future to use AI right I feel like
should we give students kind of the
right to opt out of AI entirely
um it seems like you know a lot of the
discussion we've been having here is
well the technology will be here we'll
just have to kind of mitigate its risks
but curious about what you think about
that is like you know should that be
something we're trying to protect right
as we build this new educational
ecosystem or you know ultimately is it
very challenging just given kind of how
AI appears to be you know headed to be
ubiquitous in the future well actually
that's interesting because I would ask
you what do you think would be the
motivations for a student to decide to
opt out because I can see a couple of
things it can be parent driven it could
be because a student sees that they want
their you know their voice to remain
theirs and not have any AI assisted
anything I mean again can you learn
things without AI yeah we've been doing
it for a while so probably um what would
be the motivation do you think for for
opting out I would say there's probably
a lot of fear of just like the
technology itself right which is to say
um I don't know much about it right like
I learned the old-fashioned way I
could imagine that being a very
strong incentive like I learned with
books right like I don't know why we
need these new AI assistants um you know
I I think that's probably one of the
risks uh I'm sure there's also like a
privacy risk I'm sure some parents say
where is all the data about my kid going
do I have any control over that so
you're right I think that there's a
couple reasons why someone might be
concerned about it but I think you know
like any of the new technology I think
there's just like a lot of fear over
what it is and what it might be doing to
your kid right the data is a really fair
risk although that's I think that's
something that maybe parents understand
better than their kids do especially
today's kids they've grown up not not
even thinking about the fact that
everything they do is online um but the
idea of what does it mean to learn with
it I think this goes back to a lot of
interesting things that Phaedra had
pointed out of are you going to be
subjected to biases without even
understanding that you are are you going
to end up in some sort of an
echo chamber are you going to not have
the breadth and depth of uh concepts uh
that you are trying to go through
like a human might find the
appropriate times to push back to
stop to pause to redirect and an AI is not
going to do that most of the time the AI
assistants what they really want to do
is keep hurtling along at speed in the
direction that they've been pointed at
least so far Maybe things will change so
on the other hand part of Education
needs to be how do you function in
society and even if you opt out you do
need to know how to handle it when it
comes your way or when it comes the way
of your friends or your family so even
if you have that critical of an eye I
think it's not great to say I'm not
going to learn it's like I'm not going
to learn to follow traffic signals well
I guess you can opt out but it's
probably not a very good way to be a
part of society so you at least have to
learn about it even if you don't want to
fully participate yeah and I think this
is the third topic I really did want to
touch on is kind of we're now moving
away from sort of the AI being the
teacher here to kind of the the the
difficult questions I think really
interesting questions I think around AI
literacy right which is well you know
you might opt out but we actually think
it's really important because you need
to know how to work with these systems
in the future um Skyler you're smiling I
guess you might want to jump in well I
think I was just reflecting a bit do you
think that opt out conversation is
happening at the family level at the
classroom level at the school level I
mean where where do you think maybe not
the opting out but the decisions to
really kind of you know engage with this
technology how how do you see that uh
how do you see that working out in a
practical level what what level of
decision-making do you think is going to
drive that type of adoption yeah I think
it's complex and I could see it I mean
the short answer is I think I can see it
emerging across any of these options
right like a school district might say
this is untested we're going to opt out
I could imagine a parent saying I don't
trust this technology we're going to opt
out I could also imagine a kid just
saying hey I don't learn great this way
you know how I learn best I learn best
with books right I want to opt out and
so I can see it happening across all
those levels I know Phaedra you're right
in the middle of it I don't know if you
want to jump in and kind of respond I
would say the the reason why an
individual or a group or a school or a
state would want to opt out is because
they don't trust it they don't trust and
there are many reasons why someone might
not trust an AI model and earnestly it
takes a lot of work to earn somebody's
trust it it takes a tremendous amount of
work and it's not strictly a technical
problem at all it is a sociotechnical
problem and with any sociotechnical
problem it has to be approached in a
very holistic way first beginning with
accountability like do you actually have
a group of individuals who are being
held accountable for making sure that
this model is behaving in the way that
it's intended to behave are they
being
transparent about this model and again
the worldview that has been embedded
within this model the data was it
gathered with consent is it
representative of all the different
communities which have to be served in
an educational system is it the correct
data to use according to real domain
experts who understand the context of
this data and the relationships between
this data and I I'll tell you I think
it's very unfortunate that that so many
organizations I think are ill prepared
to be held accountable for these models
and again it goes back to why the
emphasis on AI literacy and really
understanding what is the level of
effort that is needed to put into these
AI solutions in order to be able to earn
people's trust and honestly the hardest
part as I said the hardest part is not
technical the hardest part is the social
part and making sure that you've
got the right organizational culture and
the processes in place as well as the
tools and AI engineering frameworks to
do this work in a responsible way yeah
for sure and I want to unpack that a
little bit more Phaedra I think you know
what does this look like exactly AI
literacy in practice I mean is it okay
districts okay parents okay kids like
here's a curriculum right like
you have to go through the
AI 101 class or is it something else
that you're envisioning oh heck no no no
first of all it has to be
multidisciplinary now when I say
multidisciplinary I mean like get it out
of just strictly computer science class
and you know have it be where you're
bringing in uh schools of
philosophy schools of government it is
truly interdisciplinary and the
challenge I think at least within the
United States I'm not going to speak for
other countries but uh public school
systems even higher institutions within
the United States have been extremely
siloed with respect to how they teach
disciplines like artificial intelligence
as I mentioned at the beginning like if
you're lucky enough to take it right now
you're in a School of Engineering most
likely and you're not bringing in
linguistics professors you're not
bringing in philosophy professors to
talk about worldviews and ethics or even
disparate impact to give an example I've
come across uh AI practitioners who are
developing AI models to do something
like offer uh predictions on what
percentage interest rate people should
be given with respect to a home loan
that don't know what the word redlining
is they've never heard it before and
again this points to why we desperately
need to have a multi-disciplinary
interdisciplinary approach to how we
teach this subject in other words AI is
not the death of liberal arts education
if anything it's more important than
ever that's right she's right she's
absolutely right and and even when you
look at generative AI look at how much
it's being used to do coding now what
does that mean in terms of the
programming
profession whereas now people are saying
we need more English majors to be able
to craft the right
prompts right so she's
right liberal arts education is now more
important than ever that we understand
what is inequity what is human
history what is disparate impact how do
we approach ethics in a way that's
holistic and representative of all the
people that we need to serve I'm just
now so much more optimistic about my
undergraduate liberal arts
degree yeah thanks it was all worth it
yeah yeah for sure I mean and and I
guess I don't know it strikes me Phaedra I
don't know if you'd agree with the
statement that the stakes are pretty
high here in terms of getting this AI
literacy bit to work properly um because
it does seem like look you know
irresponsible deployment of the
technology could lead to some kind of
incident that really reduces public
trust that means there's going to be
less use of that technology going
forwards less opportunities to show that
the technology can really create real
benefit um it almost feels like the the
kind of like getting the trust in
education bit is going to be the thing
that kind of like ensures that we can
actually get to all the opportunities
that we've been talking about here I
don't know if you'd agree with that
at all I think in order to be able to get
to the opportunities that we're
describing where you're creating
models that earn people's trust you need
to educate people on what the heck we're
even talking
about like I said what is the real
nature of data because interestingly
working with the clients that I do so
often real domain experts who
desperately need to be part of the
conversations and have a seat at the
table their perception in their mind is
I'm not a machine learning expert I'm
not a data scientist I don't
have a degree so why do I really belong
here that's not really my swim Lane and
that's what we've been communicating to
people for decades is that they don't
belong which in fact they desperately do
we desperately need to hear their
voice at the table and in addition to
those domain experts again where
you're trying to solution something in
their domain like I mentioned
we've got to have far more diversity
inclusivity in terms of who's developing
these models and the systems of
governance around these models and that
I don't just mean gender race and
ethnicity but earnestly people who have
different lived World experiences coming
to the table to have discussions about
does this artificial intelligence is
this solving the problem is it
reflective of the needs of a
wider variety of human beings what are
the unintended effects of these models
how do we Design This in a way to earn
people's trust and as I mentioned these
aren't strictly technical challenges
yeah for sure um Marina I'm curious how
you kind of respond to all this you're
someone who spends a lot of time
directly in the research um and you know
I'm I'm sure like again when I talk
about this with some of my friends in
the machine learning space they're like
this is overwhelming we're like just
trying to get these models to work now
you want to worry about all this other
stuff um and I guess I'm kind of curious
is like do you think like in effect I
think what fedra is proposing is that
you know people who do machine learning
in the future will look really different
right like from the people who are
marginally doing it today um and in part
it'll be that like they will have to be
so strenuously
interdisciplinary that like I think it
might end up looking quite a bit
different from kind of what we expect at
like you know an ICML or you go to a
kind of technical conference today I
don't know if you'd agree with that we
used to think that only specific
uh people needed the training to you
know learn calculus and that wasn't
because you're going to be doing
calculus forever it was just because you
needed to learn what it is and how it
shows up and what does it mean to have a
structure and a proof and things of that
nature I'd make a plug uh to join fedra's
social studies class statistics early
statistics
often because part of what you really
need to do is understand how do these
models even remotely work just an
intuition not the deep math but that's
what's going to help you combine that
along with uh your work in linguistics
your work in uh history your work in in
language and and all the rest of it I do
find my own um slightly more liberal
arts background coming up a lot when it
comes to trying to talk to people with
examples that they can understand uh but
also again intuition from my stats
classes comes back time and time again
the explanation of what do these
generative models do they're playing
Guess the next word simple things they
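[Editor's illustration] That "guess the next word" framing can be made concrete with a toy sketch. This is a minimal bigram counter over a made-up corpus, nothing like the scale or mechanics of a real generative model, but it plays the same game:

```python
from collections import Counter, defaultdict

# Toy next-word guesser: count which word follows which in a tiny
# invented corpus, then predict the most frequent follower.
# Real models learn vastly richer statistics, but the intuition
# of "guess the next word" is the same.
corpus = "the cat sat on the mat and the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def guess_next(word):
    # Most common word seen immediately after `word`.
    return followers[word].most_common(1)[0][0]

print(guess_next("the"))  # "cat" follows "the" twice, beating "mat" and "fish"
```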
might not be completely accurate but
simple things don't try to boil the
ocean if everybody has just a little
more intuition and then you're going to
be more effective again another example
look at cars none of us understand how
they work but we understand how to drive
them we understand how to regulate them
we understand in general how we live
with them and use them and and what the
effects are it'll get to that point so
I'm not worried I just hope that
we're not going to be rushing it it's
going to take a little time for this to
become pervasive and become natural
and sort of second nature to the
point about how this is going to take
time again look at the Traditional
School Systems today and how siloed the
approach is and how hard it is to get
these different schools to actually work
together on a collaborative curriculum
like that I think is what's going to
be the hardest thing to move yeah just
last week I was helping my
10-year-old make a probability wheel
which is a spinner and it can fall in
one of these things and then I told him
that you know his dad me I do
probability day in and day out at my job
and I could just see his wheels spinning
what what do you mean you know you spin
this probability wheel but it goes to
Marina's point about starting those
conversations early and the
importance of that type of
background and intuition um I'm seeing
it play out already in some of these
some of these young lives so um yeah
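[Editor's illustration] The probability wheel in that anecdote is easy to simulate. Here is a hypothetical sketch; the sector labels and shares are invented for illustration, not taken from the show:

```python
import random

# A "probability wheel" is a spinner divided into sectors; a sector's
# share of the wheel is the chance the spinner lands on it.
# These labels and shares are invented for illustration.
wheel = {"red": 0.5, "blue": 0.25, "green": 0.25}

def spin(wheel, rng):
    # Land on a sector with probability proportional to its share.
    return rng.choices(list(wheel), weights=list(wheel.values()), k=1)[0]

# Spin many times: empirical frequencies approach the sector shares.
rng = random.Random(0)
spins = [spin(wheel, rng) for _ in range(10_000)]
print(round(spins.count("red") / len(spins), 2))  # close to 0.50
```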
again just a great comment Marina and uh
backing that up with a real world
example from just a week ago yeah that's
great I love that your kid imagines you
just sitting in your office with a bunch
of wheels spinning them exactly he
couldn't quite get it but I told him
that this is really important and I use
this on a daily
basis all right for our last segment
it's the end of November we're starting
to think about the new year I want to go
around and just ask each of you to kind
of tell us your greatest hope for the
new year if you could change one thing
what would that be and uh Marina I think
we'll start with you as much as possible
uh get teachers up to speed and educated
and comfortable and able to own what's
going on they are after all the folks
that drive how it's really used on the
ground and any way that we can offer
support to to teachers to meet them
where they are and make this be
something that's positive in their
classrooms that's a great one uh Skyler
you next doubling down on supporting the
teachers but with
their work outside the classroom their
extra work you know that
sort of stuff I think there are some
some areas that could be lifted off them
uh to make them so much more impactful
and involved from the front of the
classroom so I think ai's got both the
role to play helping teachers from the
front of the classroom but also I guess
what we'd call back office stuff as well
that could really really change the
lives and aspirations of teachers that's
a great one and last but not least fedra
well as I mentioned I want AI in social
studies class and I want it taught much
earlier like I said middle school
if not elementary school you could twist
my arm but then also uh I I would love
to be able to see more schools making a
concerted deliberate effort to make more
room at the table pull the seats out and
invite students who don't see themselves
as being technologists and say hey
having a conversation about AI and what
it means for you and does it reflect you
is core to you having a seat at this
table to be a critical consumer of
this Tech that's something I would
desperately want to see within the
coming years fedra Marina Skyler thanks
for joining us uh and we'll have to have
you back on in 2025 to talk more about
this and thanks to all of you listeners
for joining us if you enjoyed what you
heard you can get us on Apple podcasts
Spotify and podcast platforms everywhere
and we'll see you next week on Mixture
of Experts