Ask The Expert: Polymath Perspectives with Ian Ashmore PhD

An interview with Phrasee's new Director of Data Science

The study of language and data can take us down all sorts of riveting rabbit holes. So it is with Phrasee’s new Director of Data Science, Ian Ashmore: from snowboarding, to Jungian archetypes, to metaverses, this interview turned out to be almost as wide-ranging as Ian’s career. (For context: he’s been a photographer, fashion entrepreneur, astrophysicist…and more.) So, what led Ian to the apogee of Phrasee’s data science team? Let’s see where this particular rabbit hole takes us…



After various careers in astrophysics, analytics, snowboarding, fashion, journalism and photography (!), what’s drawn you to focusing on language?

So my PhD was ‘messy’. People tend to think physics equations give precise answers, and they do… if you can solve them! In General Relativity, for example, the equations are so complex that they’re impossible to solve (on paper) for all but the simplest and most contrived scenarios. My work involved fluid dynamics mixed with electromagnetism, where changes in the field produce changes in the flow and vice versa. These equations have been around since the 1800s, but until numerical methods were created, it would have taken decades for a human to solve them in the types of situations I modelled (and humans make mistakes with arithmetic).

The advent of computation offered a new type of solution. In pure maths, a proof is a proof, for all time, whereas numerical methods are more like probabilities (e.g. 99 times in 100, something happens). While this makes maths professors squirm, physicists and engineers only need the solution to ‘work’ (we have been making predictions from quantum mechanics for over a hundred years, and QED, quantum electrodynamics, is the most successful scientific theory of all time, but nobody knows what it really means).
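To make the “numerical methods are more like probabilities” idea concrete, here’s a toy Python sketch (our illustration, not anything from Ian’s PhD work) that estimates π by random sampling: the answer is reliably close to the true value, but never guaranteed exact.

```python
# Toy illustration: estimating pi by Monte Carlo sampling.
# The result is probabilistic - usually close to the true value, never exact.
import random

def estimate_pi(samples: int = 1_000_000) -> float:
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:  # point falls inside the quarter circle
            inside += 1
    return 4 * inside / samples

print(estimate_pi())  # e.g. 3.1413... close, with a small statistical error
```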

From snowboarding, I learned to trust my intuition, especially when assessing avalanche risk. If something doesn’t feel right, I walk away. I don’t see intuition as something supernatural – I see it as all the neural networks in my subconscious pinging out predictions that I can’t explain in words. Even without an explanation, I know that it is wise to listen to your gut.

Language is similarly messy, with all its nuance and exceptions… but underneath the mess, it is based in logic. As I got into data science, I was always drawn to language problems, since they’re hard – and the harder the problem, the more satisfying the solution. These systems are not quantum though. There is no black box. They’re just extremely complicated.


"As I got into data science, I was always drawn to language problems, since they're hard - and the harder the problem, the more satisfying the solution."

Ian Ashmore, Director of Data Science at Phrasee (PhD, Astrophysics)

Why does data science get you going – and how does it compare to astrophysics?

Data science is perfect for astrophysicists: Like in astronomy, the actual data is usually a given, and your job is to wring as much (reliable) insight out of it as possible before demanding a bigger boat (usually costing billions!). We study the data, identify patterns and use them to predict what will happen in the future. This predictive aspect is what gets me going: A pattern is identified, a prediction is made, and if it proves to be correct, we come up with a theory to explain it. Making predictions from complex data that are not obvious, but which turn out to be true, is extremely satisfying!

What is it about Phrasee that made you want to get on board (if you’ll pardon the pun)?

Please don’t apologise, I’m going to ‘run with the pun’… There’s the ‘phreedom’ to mix research with real-world application, the ‘phantastic’ opportunity to work with talented people from varied backgrounds (even to learn some ‘phactual’ linguistics), and the chance to ‘phocus’ on the fine details of language modelling. (I could go on!)

It was giving the interview presentation to Neil (Neil Yager, Phrasee Chief Data Scientist and co-founder) and Trevor (Trevor Beers, Phrasee Data Scientist) that really inspired me. I learned something of the methods employed at Phrasee and saw the opportunity to use my skills, but it was the chance to work with highly competent and interesting colleagues in a sphere where ideas can be quickly converted into products that really sold me.

What’s your view on AI’s ability to generate language? What’s the potential, and what are the limitations?

Until relatively recently, this was a pipe dream. It’s one thing to suggest the rest of a sentence (à la the Google search bar) and quite another to write a coherent book. With recent advances, it is now possible to encode larger blocks of text as vector representations that permit a direct comparison of their semantics. By combining ‘word-level’ data with ‘sentence-level’ data, the issue of ‘hallucinations’ (when the model goes ‘off piste’) can now be mitigated. In the future, this is only going to get better, but perhaps there is a limit…
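As a rough illustration of that ‘sentence-level’ vector idea (this is not Phrasee’s method; the article names no tooling, so the open-source sentence-transformers library and model used here are our own assumptions), a minimal Python sketch might look like this:

```python
# Minimal sketch of comparing sentence semantics via vector representations.
# Illustrative only - assumes the open-source sentence-transformers package
# and the publicly available 'all-MiniLM-L6-v2' model; not Phrasee's method.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "Grab 20% off everything this weekend!",
    "Everything is 20% off until Sunday.",
    "Our new autumn range has just landed.",
]

# Encode each sentence into a fixed-length vector.
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity gives a direct comparison of the semantics of each pair:
# the first two sentences should score far higher with each other than with the third.
print(util.cos_sim(embeddings, embeddings))
```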

I believe computers to be (potentially) capable of writing a story, but I am yet to be convinced that any algorithm can surpass the great author. There are two types of truth in my opinion: Objective truth, demonstrated by repeatable experiments, and literary truth, which is completely different, but no less true. In Crime and Punishment, Dostoevsky took the essence of why humans never really get away with lying (presumably after digesting thousands of real-world examples) and distilled it into a compelling tale. The understanding of morals and the human condition is ‘perception’, and we aren’t even sure what that is.

To me, this represents the difference between Machine Learning (an algorithm that produces more accurate predictions with broader experience) and Artificial Intelligence (an anthropomorphic or human-like algorithm). The issue with the latter is that a third of human interactions are irrational, and it is hard to code these. AI is now split into ‘narrow’ and ‘wide’. Narrow AI includes algorithms that outperform humans in interpreting radiographs, assessing triage in A&E, and (soon) at driving. Wide or ‘General’ AI would be a conscious computer or robot, capable of empathy, love, and malevolence. It’s hard to assess how close we are to that, since we have no consensus on what consciousness actually is. The answer to this could be for 21st century science what quantum mechanics was for the 20th century.

How’s your relationship with technology? Do you see yourself living in the Metaverse in 20 years?

I view technology as a tool. Whenever I visit London, I wonder how I ever managed before I had interactive maps on my smartphone (biro maps drawn on the back of envelopes), and there’s no doubt that delivery platforms saved many food businesses during the pandemic. Social media allows me to keep in touch with friends around the world for free: Even in the 1990s, international calls were impossibly expensive, but now I can video call a friend in Oslo or Seattle for nothing! This is magical and empowering.

That said, tech is a double-edged sword: I’d trust an algorithm to diagnose my MRI scan, deliver my dinner and even choose a date for me, but I would not trust it to distinguish right from wrong. Apps are designed to trigger dopamine release, and as humans we are programmed to chase that hit. That makes technology addictive, and most people acknowledge a low-level addiction to their phones. For young people, the danger is even more pronounced. I believe that the future with technology can be better, but tech does not come with inbuilt morals, so it’s our job to program them.

It could be that we’re all living in a simulation anyway, in which case none of this matters, but just on the off chance that the Universe really exists, I’m going to keep encouraging my son to play outside!


"The future with technology can be better, but tech does not come with inbuilt morals, so it's our job to program them."

Ian Ashmore, Director of Data Science at Phrasee (PhD, Astrophysics)

You’ve been quoted as saying you “think in calculus and write in Python”. What do you dream in?

Great question. I think we all dream in symbolic images and archetypes (now called emojis 😉). A dream takes things we ‘know’ subconsciously and converts them into things we can articulate in words. Words are representations of feelings and emotions, but they are not the things in themselves. It is tempting to ‘mistake the wood for the trees’ and assume that everyone sees the same ‘red’ or feels the same ‘love’, but even a quick look shows this assumption to be incorrect.

Language is a way for us to compare and discuss our otherwise isolated perceptive awareness, and in that way it marked a step change in evolution, but dreams are pictures in our mind’s eye and give a window into our deepest psychology. Rational materialism has had a good run, but it seems utterly incapable of explaining ‘awareness’. It could be that we just need a bigger computer, but I get the intuition that consciousness is not a computation, and that perhaps it could even be universal…

Crikey – forget rabbit holes…that was more like a quantum leap! Thanks Ian, and keep thinking big…


Looking for more industry expertise?

Check out the rest of our Ask the Expert series.