Is it OK to kick a robot dog?


Boston Dynamics created a robot dog named Spot that can dance


Is it okay to kick a robot dog?

It’s a funny question on the surface. The robot isn’t alive. It doesn’t feel pain. So… who cares?

Lately I’ve been reading AI Ethics, a book by philosopher Mark Coeckelbergh. It’s filled with tricky questions that make you stop and rethink our relationship with machines, especially the smart ones.

Let me walk you through a few of them, with a few everyday examples to help make sense of it. A quick note first: when I say “AI,” I mean every manifestation of it, both digital (think agents; 2025 is the year of AI agents, and if you don’t know what an agent is, it’s a program that acts on your behalf) and physical (robots).

Q1: Should We Hold Robots Responsible for Their Actions?

Imagine this: a self-driving car runs a red light and causes an accident. Who do we blame?

The car? (It was the one driving!)

The engineers who built it?

The company that sold it?

Coeckelbergh brings up a similar question in the book:

“We don’t hold very young children responsible for what they do because they don’t know better—should the same be done for AI?”

And we’re already delegating some of society’s decision-making to machines. COMPAS, for example, is an algorithm used in US courts to score how likely a defendant is to reoffend. That raises a second question from the book:

“Who is responsible for the harms and benefits of the technology when humans delegate agency and decisions to AI?”

One answer is transparency: the way a model arrives at its conclusions should be mapped out from end to end. But even that runs into the legal side of things where intellectual property is concerned.

Q2: What Should Only Be Done by Humans?

Let’s say AI gets really, really good. It can drive, cook, write poems, teach kids, give therapy—even paint a masterpiece.

That’s cool… but also a little dystopian.

Mark Coeckelbergh raises a deep concern in his book:

“If machines take over everything we do now in life, there would be nothing left for us to do—and we’d find our lives meaningless.”

Imagine waking up and everything is done for you. At first, it sounds like paradise. But then… boredom. Disconnection. A sense of uselessness.

We don’t just do things for efficiency. We do them for purpose.


Q3: What Makes Humans Different?

One big thing that separates us from AI is emotion. We don’t just take in data and spit out an answer.

We feel things—fear, guilt, empathy—while making decisions.

Think about holding the door open for someone. You don’t do it because you calculated that it’s 87% socially beneficial. You just feel it’s the right thing.

So even if AI agents can do tasks for us, they don’t feel with us. That emotional gap is a big deal.


So, Back to the Robot Dog…

Coeckelbergh doesn’t give black-and-white answers in his book. And I think that’s the point.

These questions are complicated. And answering them tells us more about humans than it does about the robots.


Final Thoughts

AI is growing fast. Whether it’s helping in courtrooms or dancing in your living room, we need to think more deeply about how we use it—and how it reflects our values.

So next time you see a robot dog getting kicked, maybe don’t just laugh.

Maybe ask: What would I do if it were a real dog? Or a person? Or… something in between?



Resources

AI Ethics by Mark Coeckelbergh (the book discussed)

Pantheon: A Show About Uploaded Intelligences

Frankenstein: A Book Worth Reading

Boston Dynamics: Spot
