Our conviction statement
At Eudymon, we have two fundamental beliefs:
- By default, AI will lead to intensely painful disruption. We believe that AI's rate of progress and likely future capabilities present new urgency, stakes and challenges in how we as humans all find our place in the world.
- With intentionality, AI could unlock a new chapter and scale of human flourishing. We believe that both the capabilities and the disruption of AI present new opportunities for us to radically rethink and transform human healing, learning and meaning.
In this essay - our conviction statement - we'll set out a brief sketch of our thinking on both of these beliefs.
Why 'conviction statement'?
We see boldness as the only way to meaningfully shape the world at scale - and we believe we are at a unique moment in human history, where the next 5 years will define the next 500 years.
So, we call this a 'conviction statement' as a call-to-adventure on boldness - and a way to find kindred spirits for our thinking.
We hope that it will speak deeply to a small number of people.
However, we also hope that some people will disagree with what we write - for there can be no surer litmus test for boldness.
Our main themes
We structure our thoughts around three main thematic sections:
- Taking AI-abundance risks seriously: why we should start preparing for the transition to AI abundance
- The post-employment existential crisis: how a 'post-productivity paradigm' for humans could trigger a deep existential crisis for billions
- An optimistic beacon: why we can be optimistic about this transition and the future if we act now
Taking AI-abundance risks seriously
Today, lots of people are worried about 'how can AI go wrong?'
We believe there is one key type of risk being overlooked: what happens if AI goes 'right'?
From 'AI-deficit' risks to 'AI-abundance' risks
When we ask ourselves, 'How can AI go wrong?', we typically orient ourselves towards 'AI-deficit risks' - scenarios where AI is developed in some way which is deeply flawed, misaligned or dangerous.
So, to only ask that question is to seriously risk neglecting 'AI-abundance risks'.
This might sound oxymoronic - surely, if we're in a position of AI abundance, this must be enormously positive? How does it make sense to talk about risks?
To talk about AI-abundance risks is to recognise that AI abundance can be positive overall in the long-term and still involve significant and avoidable suffering.
In particular, we believe that there will be an enormous transition cost for humans from AI-abundance - one that involves job displacement, but goes much further, to existential questions around meaning, purpose and identity.
Transition costs and creative destruction
The idea of 'transition costs' is far from a novel thought. Famously, Joseph Schumpeter coined the term 'creative destruction', describing it as the tendency of capitalism to unleash dynamism and innovation in a way that necessarily involves damage.
We believe that, if AI 'goes right', the speed and scale at which our economic and social order will be upended will be far greater than anything humans have seen before - almost to a level that is difficult to comprehend and imagine.
Even AI optimists have been caught off-guard by the rate of progress
AGI-09 was a 2009 conference for researchers interested in 'AGI' - and it was also used as an opportunity to run an 'expert-level research survey' on predictions of AI's future progress.
They asked these AI optimists to predict when certain milestones would be reached.
Some of these seem like laughably low bars today - for example, when would AI pass a third-grade assessment?
So, what was the average prediction of these AI optimists?
They thought it was more likely than not that AI would pass a third-grade assessment only after 2030 - and they did not consider it near-certain until as far out as 2075.
Not only has this prediction been comprehensively shattered, but two other significant milestones included in the research have been beaten way ahead of average estimates:
- Turing Test, passed by 2023 - was predicted to be most likely passed by AI after 2040, and only near-certain in 2075
- Nobel-level research, achieved in 2024 - was predicted to be most likely achieved by AI after 2045, and only near-certain in 2100
So, depending on how you count it, AI is anywhere from 20 to 75 years ahead of schedule as predicted by AI optimists back in 2009.
The 'last milestone' for AI is in sight
There were four milestones included in the research study discussed above.
Three of them have been comprehensively beaten, decades ahead of schedule.
There is one milestone left from the study: when will AI reach superhuman levels?
The experts at the time predicted that superhuman AI would shortly follow the previously discussed milestones in timing (or even coincide with them).
| Milestone | 50% likely | 75% likely | 90% likely |
|---|---|---|---|
| ✅ Turing test | 2040 | 2050 | 2075 |
| ✅ Nobel science | 2045 | 2080 | 2100 |
| ⏳ Superhuman | 2045 | 2080 | 2100 |
So, superhuman AI was predicted to be several decades away from where we are today in 2025 - but on a similar trajectory to the Turing Test and Nobel science milestones, both of which have been cleared.
Today, frontier AI leaders - individuals with the best visibility on AI progress - expect that superhuman AI could be reached within the next few years:
It is possible that we will have superintelligence in a few thousand days
- Sam Altman, 'The Intelligence Age'
I think [superintelligence] could come as early as 2026
- Dario Amodei, 'Machines of Loving Grace'
A post-employment economic order
To take superintelligence seriously is to consider a post-employment economic order.
Our core economic model is predicated on the idea of scarcity. Goods and services are created from finite factor inputs: land, labour, capital and enterprise. Since factor inputs are finite, we have trade-offs and choices we face in how we mix factor inputs.
Some AI-optimists imagine a post-scarcity model:
Artificial intelligence promises a future of unparalleled abundance. [...] Imagine a post-scarcity economy... where technology eliminates material limitations and goods are produced so efficiently that scarcity becomes obsolete.
- Vinod Khosla, 'AI: Dystopia or Utopia?'
We believe that some aspects of finite resources will remain as constraints, at least in our lifetimes.
However, superintelligence will (by definition) outperform humans in all cognitive work, and - with sufficient robotics advancement - in all physical work as well.
This implies a future where most humans will not be able to find gainful employment that pays a living wage.
Job displacement with no offsetting job creation
Job displacement is nothing new when it comes to creative destruction and technological progress.
For example, if we consider the US farming industry:
- in the early 1900s, there were 11.5 million farm workers (approximately 40% of the US workforce)
- mechanisation meant that, within 100 years, this had fallen by 94% to barely 700,000 workers (approximately 2% of the US workforce)
Of course, the AI-abundance risk is that job displacement affects all workers - and, unlike previous periods of job displacement, there will be no offsetting job creation.
Some people will argue that this is unambiguously a good thing - if we are all economically redundant, this is because we are in a situation of abundant goods and services, where we will be able to easily fund UBI and enjoy a standard of living that is far higher than anything we have seen before.
However, our conviction is that a 'post-productivity paradigm' for humans will trigger a deep existential crisis for billions of us, in a way that goes far beyond the loss of income.
The post-employment existential crisis
I can't go on like this, reading and watching television all day. I'm drinking my unemployment money away.
This is the despairing cry of an unnamed 40-year-old man.
It is from his suicide letter.
Mass displacement and mass suffering
The author of that suicide note was living through the 1980s Thatcher Reforms in the UK - a time of enormous structural economic change and upheaval.
The generally pro-market Thatcher Reforms paved the way for broad increases in productivity and GDP, but also led to unemployment hitting 1 in 5 in some industrial heartlands.
Even on the most generous reading of Thatcher's economic legacy, it is clear that there was also tremendous suffering for many people and communities.
We see the same suffering in other historical episodes of labour market chaos:
- During the Great Depression of the 1930s, the US suicide rate spiked to a level 63% higher than the low of the previous decade.
- More recently, the huge upheaval of the COVID-19 pandemic led to an existential meaninglessness that was associated with increased suicidal ideation.
Loss of work is more than a loss of income
To read the personal accounts from these periods is to surely recognise that these people were suffering in ways not limited to their loss of income.
Here is another quote from a former tool engineer also in 1980s Britain:
I never go out, never see any friends, only ever see a convener from another plant who sometimes calls me up. Sometimes I think my brain is dying.
Work gives us purpose, community and structure:
- 70% of employees say that work largely defines their personal purpose
- Work is the most common source of adult friendships
- People spend on average 43% of their waking time at work
For knowledge workers in particular, it is common for us to use work as a large foundation of self-identity, and to see it as a core mechanism for self-actualisation.
So, whilst UBI can address the financial trauma of mass job displacement, we will also be reckoning with the deep emotional trauma for the billions of us who will be left without work - ripping us from a core pillar in our lives.
Just like the displaced workers from past structural upheavals, perhaps many of us will find ourselves lonely and listless.
The prescience of John Maynard Keynes
Whenever economic growth and automation are discussed, the 'spectre' of the economist John Maynard Keynes often appears, thanks to his prediction in 1930 that, within 100 years, humans might be working 15-hour work weeks.
In the 21st century, the clear consensus view amongst professional economists has been that he got this badly wrong; the prediction is sometimes even held up as notorious.
For example, in a retrospective on Keynes's essay - involving contributions from 4 Nobel laureates in economics - the editors asked:
How could it be that a man of Keynes's intelligence, with a deep understanding of economics and society, could be so right in predicting a future of economic growth and improving living standards and so wrong in understanding the future trends of labor and leisure, consumption and saving?
In 2025, we are 5 years off the 100-year mark from Keynes's 1930 essay - and looking into a near-future of superintelligence. Keynes may yet be validated.
In any case, Keynes's seminal essay went much further than musing on how labour hours might reduce - he also reflected quite deeply on the fundamental nature of humanity, in a substantial part of his essay that has often been overlooked:
[W]e have been expressly evolved by nature - with all our impulses and deepest instincts - for the purpose of solving the economic problem [of labouring to satisfy our material needs]. If the economic problem is solved, mankind will be deprived of its traditional purpose.
Will this be a benefit? If one believes at all in the real values of life, the prospect at least opens up the possibility of benefit. Yet I think with dread of the readjustment of the habits and instincts of the ordinary man, bred into him for countless generations, which he may be asked to discard within a few decades.
[...]
Thus for the first time since his creation man will be faced with his real, his permanent problem - how to use his freedom from pressing economic cares... to live wisely and agreeably and well.
The threat to how we understand ourselves as individuals
What Keynes is getting at, in part, are the fundamental questions that we as humans face:
Who am I? What is my place in the world? How should I live my life?
The centrality of these questions to the human experience is reflected in the fact that they have been asked in literature, philosophy and theology for millennia.
Today, we often fall back on our job and career as a significant part of how we answer these questions.
These can profoundly shape how we understand each other and ourselves - becoming bound up with what we understand it is to be 'high-value or low-value', or even morally worthy.
Work makes us feel important, useful and worthy
We can study these perceptions of 'high value' and 'morally worthy' careers through sociological measures of 'occupational prestige' and '[perceived] occupational social value'.
You probably already have a good sense of which careers score highly on these measures, through the typical social narratives we all imbibe:
- being a nurse is highly socially valuable
- being a lawyer is highly prestigious
- being a doctor is both highly socially valuable and highly prestigious
In general, there is a strong correlation between the two measures, as shown in a research study of 2,400 people:

This chart shows us what we all know and see in social narratives around us: that the most 'prestigious' jobs tend to be those for knowledge workers.
Knowledge workers are also the most exposed to AI automation.
What does this hold in store for us?
A story of existential crisis
Imagine growing up in a village which told you that there was a proven recipe for being respected in the community as an important and morally worthy contributor.
The history of this village is of a community where people historically have delighted in warm baths. However, the village's only means of heating water is through fires.
So, the village elders take great care to impress upon you this proven recipe: to be an important and morally worthy citizen, you must prove yourself in one of the core disciplines - collecting wood, igniting fires or maintaining fires.
A life of prestige and social value
Suppose you start to train as a fire maintainer. At the start, you're not very good - fires keep going out on your watch.
But, with time and effort, you learn a few tricks. You start getting praise and positive feedback on your rapidly improving skills, which spurs you on.
It seems like the recipe is working.
Sometimes, you question yourself - you're honestly not sure that you really enjoy warm baths so much yourself, and a part of you is aware that you're taking on a life that is socially prescribed.
But, it feels nice to feel respected and useful, so you keep going.
With practice, you develop incredible judgement about how much wood to feed in, and how this depends on the state of the flame; you learn how to read the weather and anticipate the direction of cold winds that might threaten the fires; and you realise how to prioritise certain types of wood for the weather conditions that you face.
For some years - decades even - the recipe continues to pay off. You work diligently on your craft, and are rewarded with significant praise, status and respect.
A world transformed around you
One day, everything changes.
A villager returns from a journey.
She has discovered a large and resilient hot spring - one that is so large that it can accommodate the entire village.
A hot spring is, of course, a warm bath on-demand, powered by an entirely renewable source of energy, that requires no human labour.
Very suddenly, nobody cares about your carefully acquired skills of fire maintenance.
Of course, in many ways, this is wonderful for the village. The whole community is able to take warm baths at will, with zero effort expended in preparing it.
But, for you, it is a profound existential crisis.
- The status that you had within the community is now lost.
- The hours spent honing your craft now seem to have been wasted.
- Your daily structure and purpose has disappeared.
- A foundational premise of your life turns out to be false.
Living a life for ourselves
We return, again, to the big fundamental questions:
Who am I? What is my place in the world? How should I live my life?
In this story, our fire maintainer did not properly answer these questions for herself.
She uncritically accepted the 'answers' to these, as supplied by the social narratives of her village: "I am a fire maintainer, and my purpose is to tend to this craft to support my community."
Now, these narratives did serve a purpose, in motivating effort and output to create better standards of living for the community - and it's precisely the incredible utility of these narratives that embeds them so strongly.
So, in a narrow sense, the socially prescribed answers worked for a period of time.
But they were not resilient answers to the questions.
A reckoning for all of us
What will it be like to have grown up in a society which tells us that we are valuable and worthy because of our work, only for it to be suddenly ripped away from us, with no prospect of new work?
Right now, this is something to imagine - but, in the near future, this could be something that we are forced to confront.
If our work is taken away from us - what is left for us in how we understand ourselves as individuals in relation to these questions?
It is existential crisis - one where we realise that the recipe we've been given by society is no longer functioning, and that we haven't got meaningful answers for ourselves on our existential purpose.
The threat to how we understand ourselves as a species
So far, we've seen that AI-abundance poses a threat to how we understand ourselves as individuals:
Who am I? What is my place in the world? How should I live my life?
It gets worse.
AI-abundance layers on a second, deeper level of threat - posing serious questions about how we understand ourselves as a species: "What does it mean to be human?"
Humans as 'apex thinkers'
Humans have become the dominant species on this planet not with our might, but with our minds - establishing ourselves as 'apex thinkers'.
It is through our strong cognition that we have invented and discovered brilliant technologies: fire, agriculture, the printing press, steam power, electricity, the internet and space travel.
All of this has generated collective markers of progress that are quite astounding.
The way that our ideas, inventions and decisions have had an outsized impact on the world speaks to the truly awesome power of the human mind.
Having a superior intelligence matters to us
As humans, we take pride in our intelligence as a species - often using it to claim a uniqueness amongst other species, in a way that frequently drifts into a claim of greater moral worth.
Whether there is a uniquely 'human' type of intelligence - or whether it is more of a spectrum - is not the key point here, though.
What we can observe is:
- Many of us would like to see our species as unique, and more important or worthy than other species
- It's incredibly tempting to locate this uniqueness in greater intelligence
So - what would it mean for us to be comprehensively eclipsed in cognition by AI?
Humans vs the machines
Demonstrably, many of us feel a serious discomfort with the idea of AI beating us in domains of intelligence.
One of the most moving ways to experience this is to watch the award-winning, independently directed AlphaGo documentary from 2017. It followed a series of matches between Lee Sedol (18 international titles and a legendary reputation) and AlphaGo, DeepMind's AI Go player - ultimately capturing the milestone where AI eclipsed humans as Go players.
It is available for free on YouTube, and it is well worth watching in full, as a stunning piece of history captured beautifully in a documentary.
Whilst we strongly recommend watching it in full, the most relevant moment starts at 1:11:43.
At this point, having originally expected to win the series 5-0, Lee Sedol is down 0-3. He has already lost the series - it is now a question of whether he gets beaten 0-5 or not.
The odds don't look good going into the match - but Lee Sedol pulls back a victory.
As a human viewer, it is difficult to not watch that and be rooting for Lee Sedol - and, when he wins, it feels like an incredibly triumphant, and almost euphoric, moment.
This is even though he has already lost the series. Normally - in sports, for example - when the overall match is lost, 'consolation goals' (or their equivalent) are regarded as only a minor cause for celebration, since the overall result is unchanged.
So, why does Lee Sedol's victory, taking him from 0-3 to 1-3, feel so powerfully moving?
Of course, part of it is in the incredible skill of the documentary makers, and the soaringly emotive music that accompanies these scenes.
But a large part is probably about our personal identification with Lee Sedol as a standard-bearer for humanity, in a battle of humans vs machines.
For, at that point in the documentary, whilst we already know that Lee Sedol will lose overall, we don't know whether it will be a total wipeout.
Symbolically, our question is: will the machines crush us as humans?
We feel euphoria when Lee Sedol wins, because the answer is: no - or, at least, not yet.
Especially after 0-3 down, at the time, it seemed to be hopeless - that the end of the world is coming - but we s[aw] the light.
- Fan Hui, European Go Champion at the time
People felt helplessness and fear. It seemed like we humans are so weak and fragile. And this victory meant we could still hold our own.
- Lee Sedol, reflecting on his victory to take it to 1-3
In that way, his victory feels like a victory for all of us - we identify with his win as our own, because it feels like a win for all of humanity.
Usually, I'm happy when I win, not when my colleague wins. But this time, it felt like my win.
- Hajin Lee, Secretary General of the International Go Federation
Lee Sedol's victory stands for humans not being completely redundant, and for our intelligence still counting for something.
The 'defeat' of Team Humans
Of course, in that emotive identification with Lee Sedol, what feels like a victory is also, really, the giddy relief of reprieve.
We were staring down the barrel of metaphorical death - and we live to see another day.
But this is a mere stay of execution, and not a full escape.
Three years on from that series, Lee Sedol retired from Go, describing AI as "an entity that cannot be defeated".
In haunting and harrowing remarks, he later added:
Losing to AI, in a sense, meant my entire world was collapsing.
What Lee Sedol describes for himself is exactly what we described in an earlier section, on the threat to how we understand ourselves as individuals.
But the bigger thing for us all to grapple with is how we are becoming eclipsed and displaced as apex thinkers.
In understanding ourselves as individuals, we need to first understand what it means to be human - and our grasp on that may become increasingly tenuous.
An optimistic beacon
So far, this all sounds rather bleak.
But we are optimistic about this.
So far, we've sketched out some thinking on the first of our two fundamental beliefs:
By default, AI will lead to intensely painful disruption. We believe that AI's rate of progress and likely future capabilities present new urgency, stakes and challenges in how we as humans all find our place in the world.
Now, we turn to the second:
With intentionality, AI could unlock a new chapter and scale of human flourishing. We believe that both the capabilities and the disruption of AI present new opportunities for us to radically rethink and transform human healing, learning and meaning.
This part of the essay is substantially shorter, because we don't have firm answers.
Instead, we want to simply present some initial thinking, as a way to demonstrate causes for optimism - and as an invitation for interested parties to join us in our efforts.
In particular, we want to explore three huge bounties for humans:
- a rich and fruitful blossoming of human flourishing
- the nurturing of humanity's better qualities
- profound answers to our deep, existential questions
Human flourishing
Our name, 'Eudymon', comes from the Ancient Greek εὐδαιμονίᾱ (eudaimoníā).
In philosophy, eudaimoníā is most prominently associated with the work of Aristotle.
The most common translation is 'happiness', but a more evocative translation - one that better preserves the original context - is 'flourishing'.
In particular, Aristotle asked the question: what does it mean to flourish as a human being? (In other words: how do we live a life of eudaimoníā?)
We don't think Aristotle's presented answer was necessarily correct - but we think it's the right question to ask, and the right way to frame it.
There are lots of different ways that we could think creatively and optimistically about human flourishing, with avenues such as:
- creation and consumption of art without economic pressures
- travel and exploration of different sights, climates and cultures
- cultivation of hobbies from intrinsic motivation, or for the sake of commitment
- spiritual or transcendental practices that connect us to a wider whole
Our opportunity is to create space where individuals can discover, develop and deepen the ways in which they can flourish in their lives.
Nurturing the better qualities of humanity
In everyday language, we might hear aspirations of 'being a better person'.
Whilst our everyday use of this does not have sharp precision, we think there is something to be learned from how many of us intuitively and instinctively understand what it means, and accord value to it.
It is this instinctive resonance that the commentator David Brooks captures well in his musing on 'résumé virtues vs eulogy virtues':
The résumé virtues are the skills you bring to the marketplace. The eulogy virtues are the ones that are talked about at your funeral — whether you were kind, brave, honest or faithful. Were you capable of deep love?
Deep down, most of us will probably agree that these 'eulogy virtues' are the more aspirationally important.
And, yet, as David Brooks further notes (emphasis added):
We all know that the eulogy virtues are more important than the résumé ones. But our culture and our educational systems spend more time teaching the skills and strategies you need for career success than the qualities you need to radiate that sort of inner light. Many of us are clearer on how to build an external career than on how to build inner character.
We agree that, in today's society, far more time is spent on the résumé virtues than the eulogy virtues.
This will be simply untenable in the post-productivity paradigm. There will, by definition, be no employment or careers to build skills for.
As such, AI-abundance forces a change in our culture and educational systems.
Our opportunity is to build new norms, practices and institutions that can nurture the better qualities of humanity.
Arriving at our own answers to the big questions
A recurring theme in this essay has been what we see as the central questions for us all to tackle eventually:
Who am I? What is my place in the world? How should I live my life?
As we've discussed, today's social programming can lead to 'work' being a core foundation of how we answer these questions - and this is now an incredibly shaky foundation.
So, AI-abundance becomes a forcing mechanism for us to radically rethink how we answer these questions.
We think this can be a really good thing!
The temptation to evade
We think that answering these questions for ourselves is difficult but important.
The reasons this is neglected today include:
- Institutional values and incentives: a market economy encourages us to avoid answering these questions and identify with our economic output
- Human susceptibility and bias: all manner of biases work against us here, including the herd mentality and our desire to belong, hyperbolic discounting and the ostrich effect
So, it is easier to simply accept the dominant discourse.
Deathbed regrets
It is sadly instructive to look at deathbed regrets to illustrate the costs of evasion.
In being compelled to look at their own mortality, the dying are able to see how their lives were not ones that they authored for themselves.
As observed by a palliative nurse, the most common regret that she witnessed was:
I wish I'd had the courage to live a life true to myself, not the life others expected of me.
Facing up to our own mortality is, it seems, a forcing mechanism - and the tragedy of these stories is that the realisations come too late for the individuals.
Realisations from near-death experiences
We see this also in the stories of individuals who have had to face mortality, not on their immediate deathbed, but through serious illness or near-death experiences for either themselves or their loved ones.
We're probably all familiar with anecdotal stories of this - on the gain in perspective that comes from "realising what actually matters".
There is a body of research that looks at these near-death experiences - such as medical illness, surgical complications and serious accidents - and it shows that they lead to:
- greater appreciation for life
- greater self-acceptance
- greater concern for others
- greater quest for meaning/purpose
- greater spirituality
- less concern for worldly achievement
What's more, these changes are long-lasting, with no significant change in their level from 6 months to 20 years after the experience.
AI is a forcing mechanism that we can harness
Whilst these near-death experiences led to powerful and profound realisations, it would clearly be unethical to inflict such experiences on people intentionally.
However, AI-abundance creates the conditions, means and opportunity for a deep and profound perspective shift for all of humanity.
As we enter a post-productivity paradigm for humanity - where there will be no gainful employment for humans - we will be forced to give up work as a crutch for evading the existential questions.
Scaffolding the transition
That forcing mechanism has the potential to be enormously transformative in a profoundly positive way for all of humanity.
However, we should not assume that we get that 'for free', i.e. without significant scaffolding or support during that transition.
- The mass displacement of human labour will, for many people, be an unwelcome transition that they have not invited or consented to, making it harder to embrace these questions
- Many of us have not had the opportunity or guidance to develop the skills to meaningfully answer these existential questions
- We may not be able to rely on traditional structures of support that might have been available in other circumstances of confronting these questions
So, this will likely need significant scaffolding and support on a species-wide scale - and the prize could be tremendous if we manage it.
Our opportunity is to build the structures and scaffolding that empower people to find their own answers to existential questions.
A call to adventure
What we've set out here is our conviction statement.
In particular, we've set out two fundamental beliefs:
- By default, AI will lead to intensely painful disruption. We believe that AI's rate of progress and likely future capabilities present new urgency, stakes and challenges in how we as humans all find our place in the world.
- With intentionality, AI could unlock a new chapter and scale of human flourishing. We believe that both the capabilities and the disruption of AI present new opportunities for us to radically rethink and transform human healing, learning and meaning.
These core beliefs are why we're building Eudymon - an applied research lab committed to advancing human flourishing from today into the post-AGI age.
Our research mission
Our central goal is to discover, build and scale the norms and structures needed to usher in a world of universal human flourishing.
Indicative questions
We can provisionally summarise some of the questions we've been asking as:
- How might we best promote human flourishing?
- How might we effectively nurture the better qualities of humanity?
- How might we empower people to take on existential questions?
We take these questions as:
- high-level - our starting point is macro-ambition rather than micro-precision
- timeless - understanding these is enduringly valuable from now into the post-AGI age
- interconnected - we don't expect them to have entirely separate answers
- ultimately empirical - our orientation is towards real-world change
Indicative inspirations
We're incredibly future-facing, but we believe that answering these questions requires us to draw on the corpus of collective human wisdom that humanity has accumulated.
For example, we expect to take inspiration from:
- the questions and ideas of great classical philosophers
- the origins and development of our major religions
- the advancements and learnings of modern neuroscience
(and many more!)
An invitation to get in touch
We'd love to talk to people energised by the problem space that we've outlined here - whether you want to discuss our ideas, share your own thoughts, or explore joining the team.
If you'd like to discuss, please reach out: hello@eudymon.com.
Open roles
For the work we want to do, we think that the most fertile conditions will be a small, tight-knit team with highly diverse skills and experiences.
For now, we're eschewing conventional job descriptions, partly because we think that unorthodox people and profiles are likely to be most suitable for our journey.
To start a conversation, please let us know what resonates, and tell us a bit about yourself and your journey - we'd love to hear from you at hello@eudymon.com.