Who We Become
August 20, 2025
As AI continues to evolve, and with the latest update from OpenAI, I can’t help but think back to the beginning. When I first started working with ChatGPT in early 2023, it ran on GPT-3.5 and then GPT-4, and since then there have been a few major updates. On August 7, OpenAI released its most advanced version yet, GPT-5. It can work across disciplines, plan and execute entire projects, and troubleshoot along the way without a human breaking every task into tiny steps. Some are saying it’s like having a Ph.D.-level assistant. It makes me wonder how different things will be in 5, 10, or 20 years.
It feels like we’ve crossed an invisible line. The kind of line that history books will one day point to and say, that was the moment. The same current that ran through the printing press and the internet, maybe even the industrial revolution, is running through this too. We’re standing at the edge of a shift that we don’t fully understand, and maybe can’t fully see until we’re deep in it.
The way we build and use these systems will reach into almost every part of our lives. Not just in the way we work, but in how we learn, connect, govern, and even imagine the future. And when I look closely, what I see isn’t neat or simple.
There is privacy, and the fact that these models run on oceans of data, much of it scraped from the internet, some of it from places we never knowingly offered it. Once your words, images, or habits are in the machine, they’re almost impossible to pull back.
There is security, because AI can generate scams, deepfakes, and disinformation at a speed humans cannot match.
There is bias and inequality, because when AI learns from flawed human data, it can reproduce those flaws on a massive scale. What we’ve struggled to root out of our own systems, we now risk scaling at the speed of an algorithm.
There is the risk of over-reliance, the slow erosion of skills we stop using because the machine can do it for us. If we are not intentional, we might outsource our ability to be discerning.
And then there’s the hidden human cost, the part almost no one sees: data labeling. Before these systems are “safe,” there are workers, often in Kenya, the Philippines, and Venezuela, who spend hours wading through the worst corners of the internet, viewing graphic violence, child sexual abuse, torture, and hate speech. Many earn just a few dollars an hour and have no access to mental health support.
This isn’t just speculation. It’s been documented by multiple investigations and academic studies. TIME reported in January 2023 that OpenAI’s contractor in Kenya paid workers between $1.32 and $2 per hour to review and label violent, sexual, and hateful content for GPT safety systems. The Guardian and MIT Technology Review have detailed similar conditions for data labelers in Kenya, Venezuela, and the Philippines, many working without mental health protections. Research from the Oxford Internet Institute and Stanford’s Digital Civil Society Lab confirms that much of this labor is outsourced to low-income countries with minimal safeguards.
It’s an unsettling paradox: for the technology to protect millions of users, thousands of people must be subjected to toxic content. The harm isn’t erased, just displaced, away from the end user and onto vulnerable workers with the least protection. These are design choices, and they reveal what happens when efficiency or profit is placed at the center instead of people.
And all of this is moving faster than laws, ethics, and social norms can adapt. Most of it is controlled by a small number of corporations and governments, which means the values of a few are shaping tools that will affect billions.
So it makes sense that many people feel uneasy, even helpless. The pace is dizzying and the unknowns seem endless. But as cliché as it may sound, fear and worry won’t get us anywhere. Fear is actually an invitation to pay attention. It sharpens us. It asks us to step in, not away. The story that AI is coming and we can’t stop it isn’t entirely true. AI’s future will be shaped by human choices. We decide how it’s designed, who it serves, and what boundaries it’s given. History shows that every major technological leap brings both possibility and risk. What matters most is how we respond. We can’t let fear and frustration push us into handing our responsibility over to someone else.
One of the clearest perspectives I’ve found comes from Stephen Driscoll. In his book, Made in Our Image: God, Artificial Intelligence, and You, he writes about AI through four biblical themes: creation, sin, the cross, and new creation. He says our drive to make things reflects the Creator who made us, but if we are flawed, what we create will carry those flaws too.
Our creations become mirrors. They show us our brilliance, but they also show us our biases, our blind spots, and our hunger for control. That mirror can be hard to look into. But from Driscoll’s perspective, the cross reframes this tension. He views it as a turning point, a moment when love entered human failure and began to transform it. That changes the question from “Is AI good or bad?” to “Who are we becoming as we build it?”
For over a century, our work, whether with our hands or our minds, has been the way we’ve defined ourselves. Now we’re standing on the edge of a world where our survival may not depend on our labor in the same way. And yes, that can feel like a loss. But maybe it’s also an opening. If we’re freed from producing just to survive, then our worth can be measured by something other than our output.
Maybe this shift is inviting us to change our perspective, and ask a new question: What will we choose to do, not because we have to, but because we believe it matters?
If machines are getting better at being machines, maybe our job is to get better at being human. To love and to seek justice. To create in the image of something bigger than ourselves. To design in a way that puts people at the center. Human-centered design isn’t just a methodology. It’s a way of being, a way of approaching the world. It’s not something you switch on for a project and off when it’s done, but a through-line that carries into every part of life. Human-centered design measures technology by how well it sustains life, preserves dignity, and nurtures community.
The important thing to remember is that we are not powerless. We can experiment with AI and pay attention to its limits. We can ask companies, schools, and governments how they are using it and advocate for transparency. We can take part in public conversations about policies, protect our data, and support laws that give us a say. We can talk about the human work behind the technology and stand for fair pay and protections for those workers. We can decide what responsible AI looks like in our own communities. We can stay informed about the latest developments, not just to keep up, but to understand how each new step changes the questions we need to be asking.
This is how we turn fear into action, and action into real change.
If machines are built to process, then let us be the ones who design with conscience. Let us be the ones who remember what it means to be human.
The Data Boom Is Here
August 1, 2025
Four stories.
All about data centers.
All driven by the rise of artificial intelligence.
All promising growth, power, progress.
But only one of them is asking:
At what cost and for whom?
A few days ago, local news outlets reported that a $500 million data center might be coming to Fairview Township in York County. It would be built on an 82-acre site, generate millions in tax revenue, and in the developer’s words, have “minimal impact.”
The township currently makes $396 a year in tax revenue from that land. If the project moves forward, it could pull in $396,000, a thousandfold increase. That kind of math is hard to ignore.
But it’s also hard not to feel like something’s missing.
And Fairview isn’t the only site in York County being eyed for massive AI infrastructure. In Peach Bottom Township, a $5 billion data center is planned at the site of a gas-fired power plant. It’s being framed as a major investment in Pennsylvania’s energy and tech economy. But like Fairview, the public details are thin and the framing leans heavily on construction jobs and economic growth, without discussing long-term impacts.
At nearly the same time, Fast Company published an article about the rise of hyperscale data centers, some so big they sprawl across land the size of Manhattan, powered by billions of watts and cooled by hundreds of thousands of gallons of water a day.
So why are some articles painting this as an economic win, while others read like a warning?
The answer isn’t just about data centers. It’s about how we talk about “progress.”
What They Say and What They Don’t
Developers and officials love to talk about tax revenue. They rarely mention the electrical load, water usage, or long-term ecological costs. In Fairview Township, no one seems to be asking how the grid will keep up. Or what happens to residents’ bills when 400 megawatts are re-routed to servers instead of homes.
The project is being called “low impact” because it sits 400 feet from the nearest neighbor. But “impact” isn’t just about noise or traffic. It’s about infrastructure strain. It’s about energy monopolies. It’s about reshaping landscapes to serve machines and making sure humans adjust around them.
We’re told these centers will support AI. But we’re not told who that AI is designed to serve. We’re not told how many permanent jobs these centers will create. Or whether those jobs will go to local residents. Or whether we’ll be left with a fenced-off industrial park and higher utility bills.
The Bigger Picture
Zoom out, and this isn’t just a York County issue.
All across the country, tech companies are racing to build more data centers driven by AI, cloud computing, and a hunger for faster everything. Meta’s Hyperion center in Louisiana will span 2,250 acres and use up to 5 gigawatts of power. (For reference, a single gigawatt can power about 750,000 homes.)
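To make that concrete, here is the rough back-of-envelope math, using the 750,000-homes-per-gigawatt figure above (a rule of thumb, since actual household demand varies by region and season):

5 gigawatts × 750,000 homes per gigawatt ≈ 3.75 million homes

One facility, drawing as much power as millions of households.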
Designers and engineers are scrambling to cool these facilities, shifting from air conditioning to liquid immersion to, in some cases, geothermal plants. But even “efficient” solutions don’t erase the reality. These buildings are growing bigger, heavier, hotter, and hungrier.
And as they grow, so do the gaps in public understanding. Because technical terms like “megawatt” or “redundant fiber routing” don’t sound urgent. They sound manageable. Like someone else’s problem.
A Different Model
But not all data centers follow the same script.
In Alberta, Canada, a project called Wonder Valley is taking a radically different approach. It’s set to become the world’s largest AI data center industrial park. It plans to generate its own off-grid power using natural gas and geothermal systems. No dependency on local utilities. No added pressure on regional grids. Even the buildings themselves are being designed with sustainability in mind, using local timber and reflecting the character of the surrounding landscape.
What they’re building reflects what they care about.
Is it perfect? Of course not. But it’s proof that data centers don’t have to come at the expense of the communities around them. It raises the bar for what’s possible. And it invites us to ask: If some companies can build data centers that produce their own power, prioritize environmental design, and reinvest in the local economy, why are others asking struggling townships to absorb the cost?
What’s at Stake
If we believe in human-centered design, we have to start asking: who is this really for?
Are we designing systems that improve people’s lives or just speeding up the algorithms that sell to them?
Are we protecting the air, water, and land, or trading it away for the illusion of innovation?
Are we giving communities a voice, or just letting developers do the talking?
Final Thought
It’s easy to look at a billion-dollar project and see momentum. It’s harder to slow down and ask what kind of future we’re building and who gets to shape it.
Data centers may seem distant or technical, but they’re not. They’re physical, political, and profoundly human. They tell us what we value. And they’re asking us, quietly but insistently, to pay attention. Because we can’t afford to confuse silence with consent. Or distance with lack of impact.
Progress is only meaningful when everyday people get to define it.