Turning Sound into Matter

Transcriber: Amanda Zhu
Reviewer: Rhonda Jacobs

My name is Marcus Buehler.

I’m the McAfee Professor
of Engineering at MIT.

And I’m also a member of the Center
for Computational Science and Engineering

in the Schwarzman College of Computing.

In this talk, I’ll be talking
about the nexus of materialized sound

and sonified material.

We’re going to be talking about
how vibrations, sound, and matter interact

and how we can use music
to design new and better materials.

If we’re thinking
about biological structures,

such as a spiderweb,

we can see they’re very detailed,
very intricate, very complex structures.

If we look inside a spiderweb -

in this case, a 3D spiderweb -

there are many internal structures

that go really from the macroscale
all the way down to the nanoscale.

We’re now flying inside the web structure,

and we can see that this web
has very complex architectural features.

As we go closer, we see more and more
of those architectural features

emerge and become visible.

If we go even closer, we can look
inside each of the silk filaments.

We can recognize

that each silk filament itself
consists of a hierarchical structure.

This hierarchical structure
ranges from the molecular scale,

the individual protein molecules,

which are assembled atom by atom

to form secondary structures

to form tertiary structures
to form bundles of proteins,

ultimately forming filaments

that assemble into bundles
of filaments and fibrils

and, finally, the silk fibers
that you can see in the web.

So you can see that the web

really has a structure
that goes from the macroscale

all the way down to the nanoscale.

How are these materials built?

Well, these materials are built in nature

by encoding structural information
through the genetic sequence,

usually encoded by DNA.

These DNA letters encode information
about how proteins are built.

Proteins are built from primary sequences:

the genetic letters specify
sequences of amino acids,

forming secondary structures
such as alpha helices or beta sheets,

and these in turn form
more complicated structures,

such as collagen in our bones,

spider silk consisting of beta sheets
and alpha helix mixtures,

and even more complex structures

like viruses.

What you see in this slide,
in this picture here,

is the pathogen that causes COVID-19,

which has these spike proteins
sticking out on the surface,

which give this virus its name,
the coronavirus, or crowns.

This coronavirus is encoded
by sequences of amino acids,

encoded by letters of RNA or DNA,
genetic information.

This genetic information
provides the building plan

for how this virus is actually built.

Just like the virus
is built from the bottom up,

forming hierarchical structures

across different length scales
and time scales,

we also know that in engineering,

we might be able to use
such an approach as well.

Thinking about an architectural system
like the Eiffel Tower,

you can also recognize
that this system has features as well

that go from the macro-
all the way down to the nanoscale.

Even though engineers have been using
hierarchical principles

for an extended period of time,

we have not yet been able
to simultaneously tune structures

from the molecular scale
all the way to the macroscopic level.

One other feature
that’s really interesting

is a unifying theme
across different manifestations of matter.

And that is the equivalence
of vibrations, matter, and sound.

The universality of waves and vibrations
is something we see in molecules.

We can recognize that
at the quantum mechanical level,

we can describe matter
as collections of waves.

We can also see

that sound is a superposition
of sine waves, harmonic waves,

to create more complicated
sound structures.
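
To make that concrete, here is a minimal sketch in Python (my own illustration, not code from the talk) of how summing a few harmonic sine waves produces a richer, more complicated tone:

```python
# Illustrative sketch: summing harmonic sine waves to build a complex tone.
import numpy as np

SAMPLE_RATE = 44100          # samples per second
DURATION = 2.0               # seconds
t = np.linspace(0.0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)

fundamental = 220.0          # Hz (an A3, chosen arbitrarily)
# Amplitudes for the first few harmonics; changing these changes the timbre.
harmonic_amplitudes = [1.0, 0.5, 0.25, 0.125]

# Superpose sine waves at integer multiples of the fundamental frequency.
tone = sum(a * np.sin(2 * np.pi * fundamental * (k + 1) * t)
           for k, a in enumerate(harmonic_amplitudes))
tone /= np.max(np.abs(tone))  # normalize to [-1, 1] for playback

# `tone` can now be written to a WAV file, e.g. with scipy.io.wavfile.write.
```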

And we can also see that spiders,
for instance, use waves

as a way of communicating
and understanding the environment.

Waves, sound, vibrations are universal,

and we can use perhaps
vibrations and sound

as a way of defining material models,

optimizing materials,

and even inventing entirely new materials

by using vibrations.

Here we show how we can evolve
the way hierarchical systems are built.

Thinking about a spider,

a spider uses vibrations
as a way of sensing the environment,

communicating with other spiders,
sensing threat, detecting prey,

and many other things.

They take the signals they collect,
process them in their brains,

and make decisions -

make decisions about how to build the web,
just like an autonomous 3D printer.

They build webs by assembling
materials in space,

depositing materials in space,
repairing the web,

and interacting with other spiders,

forming an autonomous material system,
a smart material system,

an intelligent material system.

Humans operate in a very similar way.

When humans build things,

when we create a painting,
play an instrument,

we sense the environment,

we make decisions about what to do next,
what kind of tool to use.

When we’re carving wood,

we decide what action to take next
to create a certain pattern.

When we play an instrument,

we decide what key to play next
depending on what we hear.

These kinds of processes
are very similar to what the spider does.

The question is, can we incorporate
some of those feedback mechanisms,

some of these autonomous ways
of creating materials, of creating matter

through sensing, processing
information in neural networks

and creating new things from it?

Can we utilize those and implement those
in technological solutions

to create materials that aren’t static
but materials that are alive,

that can interact with the environment
in innovative and novel ways?

In fact, one way to do that
is to translate matter into sound -

because matter is equivalent
to vibrations -

and use sound as a way
of designing new matter.

The way that we do this
is we have a material composition,

a material structure -

we can understand it
as a set of vibrations -

we can compute the set of vibrations,
make it into audible sound,

and manipulate the sound.

We can make new sound,

we can change the sound,

and we can then use a reverse translation
to move sound back into matter.

By doing this, we solve
the design problem,

which really consists of assembling
a set of building blocks,

kind of like Lego building blocks,

into structures.

In the case of sound, those building blocks

are sine waves or instruments
or melodies or keys on a piano.

We can assemble
complex pieces of structure,

complex pieces of sound,
complex melodies,

simultaneously played,
intersecting, interweaving,

and create really
complicated designs in sound,

which then we can translate
back into material.
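
As a toy illustration of that round trip (my own sketch with made-up building blocks and an arbitrary pitch mapping, not the actual method), one could encode a structure as notes, manipulate it in sound space, and decode it back:

```python
# Toy round-trip sketch: building blocks <-> pitches, edit in sound space,
# translate back to a building-block sequence. All names are hypothetical.
BLOCKS = ["A", "B", "C", "D"]
BLOCK_TO_PITCH = {b: 60 + i for i, b in enumerate(BLOCKS)}   # C4, C#4, D4, D#4
PITCH_TO_BLOCK = {p: b for b, p in BLOCK_TO_PITCH.items()}

def materialize_to_sound(blocks):
    """Forward translation: material structure -> sequence of pitches."""
    return [BLOCK_TO_PITCH[b] for b in blocks]

def sonify_edit(pitches, shift=1):
    """Edit in sound space: here, simply transpose every pitch upward."""
    return [p + shift for p in pitches]

def sound_to_material(pitches):
    """Reverse translation: pitches -> building blocks.
    Pitches outside the known range wrap around, keeping the map invertible."""
    lo = min(BLOCK_TO_PITCH.values())
    return [PITCH_TO_BLOCK[lo + (p - lo) % len(BLOCKS)] for p in pitches]

original = ["A", "B", "B", "C", "D"]
melody = materialize_to_sound(original)      # design problem expressed as sound
edited = sonify_edit(melody, shift=2)        # manipulate the sound
new_material = sound_to_material(edited)     # materialize the new sound
print(original, "->", new_material)
```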

So the question is,

what kind of material
would a certain composition,

like from Bach or Beethoven,
maybe represent?

Can we utilize this idea

in designing entirely new materials
that nature has not yet invented?

Can we come up with engineering solutions

to sustainable materials
that we cannot otherwise obtain?

Sound is a really elegant way
of capturing multiple levels

in the material organization.

Take the spiderweb:

it has many different structures.

If you recall, we were going
from the big, large scale into the web,

and we can recognize from the beginning

the architectural levels,
structural details,

all the way down to the molecular scales

and the individual atoms
that make up the amino acids,

which are the building blocks of proteins.

This progression from amino acids to proteins,
to assemblies of proteins,

to filaments and fibers,
to the entire web architecture

is a really complicated puzzle.

By using sound, we can hear simultaneously
all these different levels.

Each level contributes
a particular type of frequency spectrum.

By listening to it, our ear, our brain
can process the information,

and we can design
new hierarchical structures,

just like in music.

If we think about matter and molecules,
let’s take a closer look.

If you open a chemistry textbook,

most likely you’re going to find
a drawing of a molecule,

like benzene in this case.

These kinds of models change over time,

but I would say they’re all wrong

because these pictures
in a textbook are static.

They look like static drawings,

when in fact, molecules
are continuously moving.

They’re vibrating;
they’re moving all the time.

These vibrations and movements

are actually what define
the structure of these molecules.

Each molecule has
a unique fingerprint of sound,

just like here, where you can hear
the vibrations of a guitar -

vibrations creating
what we call music.

(A few notes on a guitar)

In a similar way, the vibrations of a molecule
also have a unique sound,

and we can make it audible

by transposing the frequencies
into the audible range

so that our brain
can process the information.
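
One common way to do such a transposition (a hedged sketch, not necessarily the exact scheme used here) is to shift each molecular frequency down by whole octaves until it lands in the audible band, which preserves the ratios between the different modes:

```python
# Sketch: shift a molecular vibrational frequency into the audible range
# by whole-octave steps (one common sonification choice).
AUDIBLE_MIN, AUDIBLE_MAX = 20.0, 20_000.0   # Hz, approximate range of human hearing

def transpose_to_audible(freq_hz):
    """Halve (or double) the frequency by octaves until it is audible.
    Octave shifts preserve the ratios between different molecular modes."""
    f = float(freq_hz)
    while f > AUDIBLE_MAX:
        f /= 2.0
    while f < AUDIBLE_MIN:
        f *= 2.0
    return f

# Example: a vibrational mode around 30 THz (3e13 Hz, a typical molecular scale)
mode = 3.0e13
print(f"{mode:.2e} Hz  ->  {transpose_to_audible(mode):.1f} Hz (audible)")
```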

What you hear here is the sound
of a complex protein structure.

(Electronic music)

The protein is vibrating all the time.

It’s continuously moving.

These movements and motions
can be made into audible sound,

just like playing multiple guitars,
multiple instruments,

and multiple structures
in musical composition.

By having a model of a protein in sound,

we can begin to understand
the protein better,

have another way
of understanding structure,

we can very quickly process information,

we can understand
questions like mutations,

we can understand how proteins
might change the folding geometry

as mutations happen,

we can understand
how diseases might be treated

by developing antibodies or drugs
that bind to the protein.

All these aspects can be very easily done
and heard in sound space.

One discovery made recently
is that each of the amino acids,

the 20 natural building blocks
of all proteins,

has a unique sound.

They have a unique fingerprint.

In other words, they have
a unique key on a piano.

They all sound different.
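
As a purely illustrative stand-in (the real fingerprints come from each amino acid’s vibrational spectrum, not from an arbitrary table), one could assign each of the 20 amino acids its own key and use that mapping invertibly:

```python
# Toy illustration: give each of the 20 standard amino acids its own
# "key on a piano" (a MIDI note number). The fixed mapping is just a
# stand-in to show the idea of a unique, invertible fingerprint.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"             # one-letter codes, 20 residues

AA_TO_NOTE = {aa: 48 + i for i, aa in enumerate(AMINO_ACIDS)}   # C3 upward
NOTE_TO_AA = {n: aa for aa, n in AA_TO_NOTE.items()}

def sequence_to_melody(sequence):
    """Protein primary sequence -> list of MIDI notes (one note per residue)."""
    return [AA_TO_NOTE[aa] for aa in sequence]

def melody_to_sequence(notes):
    """Inverse mapping: MIDI notes -> primary sequence."""
    return "".join(NOTE_TO_AA[n] for n in notes)

fragment = "MKTAYIAK"                            # a short, arbitrary peptide
notes = sequence_to_melody(fragment)
assert melody_to_sequence(notes) == fragment     # mapping is invertible
print(fragment, "->", notes)
```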

What you hear now is the sound
of each of the 20 amino acids

going from the beginning to the end.

(Electronically generated sounds)

These are the sounds of life.

These sounds can be utilized
to build models of proteins;

in fact, what you hear now

is a musical representation
of the spike protein

of the COVID-19 pathogen.

(Slow, string music)

This is a very large protein,
with about 3,000 amino acids.

Because the protein is so big and has
such a complicated folding geometry,

the musical composition that results
from this protein, reflecting its structure,

is very long;
(Music ends)

in fact, it’s about one hour
and 50 minutes long.

The protein itself
is hierarchical in nature.

It has a primary sequence,
as we’ve talked about before,

encoded by the genetic
information of the virus.

Again, there are about 30,000
letters of information

in the genetic code of the virus.

3,000 of these encode
this particular protein.

Then we have secondary structures

like alpha helices
and beta sheets and random coils

and other structures as well.

These are then folded
into complex geometries.

The resulting music
is a very complicated piece

because we have many different melodies
weaving into one another,

creating what we call
in music “counterpoint.”

Counterpoint is a concept
introduced and used very heavily

by Johann Sebastian Bach, for instance,
a couple of hundred years ago.

So he had already utilized

some of the structural features
we find in proteins.

By using sound or music
as a way of modeling proteins,

we can build very powerful coding models

that we can use in
artificial intelligence applications.

In fact, in recent work,
we have used proteins to build data sets

to represent thousands and hundreds
of thousands of hours of music

that reflect these proteins

and train artificial neural networks
to listen to them.

These AIs can then generate new music
based on what they have learned.

These new musical compositions

can then, once generated,
be translated back into proteins

because we have a unique mapping

between the protein sound
and the genetic information.
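
A deliberately tiny stand-in for that loop (the actual work trains deep neural networks on protein-derived music; this bigram model and the made-up training fragments are only a sketch) might look like this:

```python
# Toy sketch of "learn from protein melodies -> generate a new melody ->
# translate back to a sequence". The real system uses deep neural networks.
import random
from collections import defaultdict

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_TO_NOTE = {aa: 48 + i for i, aa in enumerate(AMINO_ACIDS)}
NOTE_TO_AA = {n: aa for aa, n in AA_TO_NOTE.items()}

# Hypothetical training data: melodies derived from made-up protein fragments.
training_sequences = ["MKTAYIAKQR", "GAVLIMCFYW", "STNQKRHDE"]
training_melodies = [[AA_TO_NOTE[aa] for aa in s] for s in training_sequences]

# Learn bigram statistics: which note tends to follow which.
transitions = defaultdict(list)
for melody in training_melodies:
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)

def generate_melody(length=10, seed=None):
    """Sample a new melody from the learned note-to-note transitions."""
    rng = random.Random(seed)
    note = rng.choice(list(transitions))
    melody = [note]
    for _ in range(length - 1):
        note = rng.choice(transitions.get(note, list(AA_TO_NOTE.values())))
        melody.append(note)
    return melody

new_melody = generate_melody(length=12, seed=0)
new_sequence = "".join(NOTE_TO_AA[n] for n in new_melody)   # back-translation
print(new_sequence)
```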

So we can go from protein,
from material to sound

through the understanding
of the equivalence of waves and matter.

We can then use waves, or sound,
as a way of creating new sound,

of editing the sound,
of manipulating the sound,

of coming up with new design solutions,

not only by humans but also using AIs.

And we can take the new sound
and translate it back into material -

so we can materialize sound.

This nexus of matter and sound
is very exciting

because it allows us
to use different techniques

to solve various design problems.

In the case of COVID-19,

one of the design problems
we’re after, of course,

is to think about ways
of creating antibodies,

molecules of proteins that can bind
to the protein in the virus

more strongly than the protein
can bind to the human cell.

What you hear now is one of these proteins
that we have generated using AI.

(Violin music)

And you can see in the picture
what this protein looks like.

This is a protein
that nature has not yet invented.

Now, how do we create this?

We listen to many different kinds
of coronavirus spike proteins,

different species,

different evolutionary stages
of the coronavirus,

not only the current COVID-19 virus,
but many other coronaviruses.

We then let the AI method
generate new music

that reflects the innate structures
in these particular types of proteins,

which are all spike proteins in viruses.

And the resulting piece

is a composition that reflects
a protein geometry, a protein sequence

that has something to do
with these coronavirus spike proteins

but has not yet been found in nature.

This kind of composition,
this kind of sequence,

might in fact hold the key to an antibody

because it matches the types of sequence
that we find in the protein,

in the genetic information.

(Music)

Here you can hear a piano composition
that reflects the moment of infection.

This is a protein structure that resembles

the moment when the virus spike protein
attaches to the human cell.

During the attachment process,
(Music ends)

the protein changes
its orientation slightly,

and you can hear this attachment

in a slight change in the spectrum
of frequencies and vibrations,

and you can make it audible through music.

So music here provides a microscope
into the world of molecular motions,

into the world of infection, detachment,
and the interaction of the virus

ultimately with the human body.

Vibrations can also be seen
in other manifestations;

for instance, in surface waves.

Water waves on a lake
are a very common phenomenon;

in fact, the phenomenon of sunlight
shining on a lake or another body of water,

of waves forming
on the surface of the water,

and of the glittering
that results

is something that’s been
very important in human evolution.

Humans use this glittering
as a way of finding water -

and not only humans,
but many animals as well.

It’s a way of detecting water
by using surface waves.

So we’ve been trying to see

whether we can think about
using the deeper structures

of water waves, surface waves

generated not only by wind loading
or other environmental influences

but also

by the mechanical signatures
of vibrations encoded in proteins.

So we’ve created an experimental setup

where we can excite water

through the innate
vibrations in the protein

and make them visible.

You can then see at the macroscopic level

with your eyes

how these proteins excite water

and what kind of
unique patterns they form.

It turns out that different protein states
produce different vibrations,

and we can see with our eyes
the different patterns they form,

patterns that originate at the molecular scale.
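
A cartoon of that idea (a toy simulation with hypothetical mode spectra, not the actual experimental setup) is to superpose circular waves whose wavenumbers follow from the driving frequencies and compare the resulting patterns:

```python
# Cartoon simulation: superpose circular waves driven at a (hypothetical)
# protein mode spectrum and compare the resulting surface patterns.
import numpy as np

def surface_pattern(frequencies_hz, amplitudes, size=256, extent=0.05, t=0.0):
    """Height field of a small water surface driven at the given frequencies.

    Uses the deep-water dispersion relation w^2 = g*k to turn each driving
    frequency into a wavenumber; everything else is schematic.
    """
    g = 9.81
    x = np.linspace(-extent, extent, size)
    X, Y = np.meshgrid(x, x)
    r = np.sqrt(X**2 + Y**2) + 1e-9              # distance from the excitation point
    height = np.zeros_like(r)
    for f, a in zip(frequencies_hz, amplitudes):
        omega = 2 * np.pi * f
        k = omega**2 / g                          # deep-water dispersion: k = w^2 / g
        height += a * np.cos(k * r - omega * t)
    return height

# Two hypothetical "proteins" with different (already transposed) mode spectra:
pattern_a = surface_pattern([110.0, 220.0, 330.0], [1.0, 0.6, 0.3])
pattern_b = surface_pattern([130.0, 170.0, 410.0], [1.0, 0.8, 0.2])
print(np.abs(pattern_a - pattern_b).mean())       # the two patterns differ
```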

It provides yet another way
of visualizing nanoscopic elements,

nanoscopic events, nanoscopic features,

not only with our ears, like in music,

but also using our eyes
by looking at wave patterns.

These wave patterns can distort reality.

As shown here in this animation,
(Music)

you can see how we have used a camera
to film the surface of the water

and watch the reflections
of the environment,

in this case, trees and brushes
in a snowy landscape.

Because there’s a slight
wind loading on this water body,

there’s slight surface waves,

and these surface waves
distort the image recorded by the camera.

(Music ends)

So even though you can recognize
the image, there’s a slight distortion.

This distortion, the inceptionism

of creating a different image
based on an environmental influence

is something we’d like to explore

and see whether we can use
a similar concept

to see how reality
might be distorted or changed

by visualizing
protein vibrations in water.

Imaging water waves
generated by protein vibrations

is in fact a powerful way
of detecting proteins.

What we’ve done here is we have selected
a number of different proteins

and visualized them in water waves,
in water surface waves,

and then trained the neural network
against thousands of images

for each of those proteins.

What the neural network can learn
through this training process is:

What are the wave patterns

that are associated
with each of the protein structures?
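
A hedged sketch of such a classifier (PyTorch is assumed here; this is not the authors’ code, and random tensors stand in for the real photographs) could look like this:

```python
# Sketch: a small convolutional network trained to classify surface-wave
# images by the protein that produced them. Dummy data stands in for photos.
import torch
import torch.nn as nn

NUM_PROTEINS = 5          # labels are just class indices in this sketch

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, NUM_PROTEINS),        # 64x64 input -> 16x16 after pooling
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in "dataset": grayscale 64x64 wave images with their protein labels.
images = torch.randn(32, 1, 64, 64)
labels = torch.randint(0, NUM_PROTEINS, (32,))

for step in range(5):                             # a few illustrative training steps
    logits = model(images)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Inference: probabilities over proteins for a new surface-wave image.
probs = torch.softmax(model(torch.randn(1, 1, 64, 64)), dim=1)
print(probs)
```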

This is what it looks like
for one of the examples.

You can see there’s a really interesting
innate pattern forming on the surface

because of the protein vibrations.

So these mechanical vibrations
of the proteins

are causing these surface waves,

which in turn create
very interesting patterns

that can be picked up with the eyes
or with a high speed camera.

Each protein has
a unique spectrum of vibrations,

as I mentioned earlier.

You could hear that
in the music I’ve played.

Here is a graphical visual
representation of the same idea.

You can see in this bar chart
the fingerprint of two different proteins.

On the left-hand side,
it’s a protein called 6m17,

which is the situation

when the COVID-19 pathogen
is bound to the human cell.

On the right-hand side,
you see a protein called 6m18.

It’s the case when the virus
is not attached to the human cell.

So on the right-hand side, not infected;
on the left-hand side, infected.

This protein

is a particularly important piece

of understanding the infection process
of COVID-19 in the human body.

We’ve trained a neural network
against many different proteins

and the surface waves they generate.

We can do another experiment now

and film or record photos of surface waves
associated with different proteins

and use the neural network to classify

what kind of protein
has caused these surface waves.

In fact, the method works really well.

You can see on the left-hand side,
it’s a protein called 107m.

This protein is shaded
in a brownish color.

And you can see in this bar chart,

the highest probability of prediction
for this scenario is the brown color,

which, in fact, reflects
this particular protein, 107m.

It’s by far the highest probability.

So the model is perfectly able
to predict the structure.

And you can go through this entire graph
and see that in every single case,

the highest prediction, by far,

reflects the actual protein
causing the vibration.

So the method is able to,

by just looking at the picture
of the surface waves,

immediately detect the underlying
protein causing these vibrations.

Let’s look at the middle part.

6m17 and 6m18 are
the proteins shown before.

These are the infection stages,

when the molecular interaction begins
between the COVID-19 pathogen

and the human body.

6m17 is the attached state;
6m18 is the detached state.

And even though the structure
is very similar -

there’s only a very slight
molecular change

and very slight change
in the vibrational spectrum,

as you’ve seen on the previous picture -

the method is able
to pick up the differences very well.

The highest probability
in 6m18 is a light blue,

which reflects that particular structure.

So it’s able to predict that.

6m17 is a greenish color,
and it’s the same idea:

the highest probability
is for this particular structure.

So the method can not only distinguish
many different classes of proteins -

small, big - but it can also describe

very subtle differences
in vibrational spectra,

very subtle differences
in protein folding states

through these surface waves.

We can use this method to develop
an approach called protein inceptionism.

We can try to see
whether the patterns

that are found in these surface waves
in water, generated by the proteins,

can also be found in other images.

Taking photos of mountain landscapes,
maybe of lakes,

of anything
we can see with our eyes,

we can check whether we can find,
in these other systems as well,

some of the innate features
that the protein vibrations

imprint on surface waves.

Where and how do we recognize molecular
vibrations in other everyday objects?

We use the DeepDream algorithm to do that

and apply the neural network
we have trained

against all these various
protein vibrations.
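
Here is a sketch of the DeepDream idea itself (a generic implementation on a stand-in network, not the exact algorithm or model used in this work): gradient ascent on the input image so that the features an inner layer responds to are amplified:

```python
# Sketch of DeepDream-style "inceptionism": nudge the input image so that
# the activations of an inner layer grow, making its learned features visible.
import torch
import torch.nn as nn

# A stand-in for the trained protein-vibration classifier's feature extractor.
features = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
)

def deep_dream(image, layers, steps=20, lr=0.1):
    """Amplify whatever the inner layers 'see' in the image via gradient ascent."""
    img = image.clone().requires_grad_(True)
    for _ in range(steps):
        activation = layers(img)
        loss = activation.norm()                  # maximize activation strength
        loss.backward()
        with torch.no_grad():
            img += lr * img.grad / (img.grad.norm() + 1e-8)
            img.grad.zero_()
    return img.detach()

wave_photo = torch.rand(1, 1, 64, 64)             # stand-in for a water-surface photo
dreamed = deep_dream(wave_photo, features)
print((dreamed - wave_photo).abs().mean())        # the image has been nudged toward the features
```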

You can see a picture here.

This is what the vibrational
spectrum looks like,

embedded, realized
in this water wave surface structure.

If we apply the protein
inceptionism algorithm to that,

it will, in fact, recognize
all these different patterns

which are unique
to this particular protein.

And that’s how the neural network works.

The inner layers pick out features
that are unique to that particular protein

and detect which protein
has been creating the vibrations.

We can use that image processing

to see these features
a little more clearly,

and this picture here shows

how the processing of this results
in these spaghetti-like structures,

so those are the unique
fingerprints, or structures,

that are actually causing
these particular resonances

in the neural network.

The resonances in the neural network

generated by the protein
inceptionism algorithm

really are a powerful way of visualizing

how certain features can be magnified,

made more visible, amplified,
and resonated in these images.

Just like resonances happen
in musical instruments like a guitar,

here we can see resonances

as an image generated ultimately
by the molecular vibrations.

Now, if we look at another situation
where we have water waves in the river -

this is the original picture -

and these waves are now
not caused by proteins,

these waves, in fact,
are caused by flowing water over rocks,

and you can see how the algorithm picks up
certain features in these water waves,

which, again, do not occur
because of proteins

but have similar features as the ones
seen in protein-caused water waves.

Again, with some processing of the images,

you can see there’s
a certain pattern that emerges.

These are all the areas,
the spaghetti-like structures,

where the algorithm detects resonances
with the inner detailed structures

that resemble those caused
by protein vibrations.

So protein vibrations
are also seen in rivers.

This is an example of a coastal landscape
where we have three elements.

We have the water, we have rocks,
and we have air.

And in fact, the algorithm detects
these features of protein vibrations

in all three elements -

some of them in the water waves,
which is not surprising,

because both of them are water waves.

We also see some of these patterns
echoed in the rocks.

Some of the features,
some of the patterns in rocks

resemble those seen in the proteins.

And we can also see a few of those
being picked up in the sky.

And again, this is the analysis
using the image processing,

and you can see where in the image

we can pick up the features
that are natural,

that are innate to the protein vibrations.

Matter is sound, and sound is matter.

In fact, we’ve seen that when we think
about the representation of material,

we can think of it
as a collection of vibrations.

We can make it audible.

We can also make the vibrations visible
in other states of matter,

like in liquids, in water,
for instance, as surface waves.

And we can utilize various ways
of manipulating matter,

of creating new materials

by either creating new sound

or using sound

as a way of detecting information
in existing musical compositions.

So you can ask the question:

What kind of material did Beethoven create?
We can answer that by analyzing the compositions he made.

We can also see protein vibrations
or the features of protein vibrations,

the unique signatures of the vibrational
spectrum, in other forms.

Using the protein inceptionism
as an algorithm,

we’ve been able to show
that these vibrations can be seen

not only in water waves
but also in other states of matter.

They can be seen in landscapes.

They can be seen in plants.

They can be seen in the sky and snow
and many other elements.

Thank you so much for your attention.