Artificial Anthropologies

We've all dealt with "default settings" in software that sit below the radar and sometimes reset themselves without our knowledge. Facebook is a prime example.

What happens when we cede control to machines such as our self-driving cars, and to the so-called "morality" settings that may govern them, e.g. "swerve to save the human, not the gorilla", only to realize later that something has reverted to a "default"?

How will machine-learning algorithms take the perceptual-blindness test? If a car detects a Halloween trick-or-treater in a gorilla costume and someone without a costume, will it hit the gorilla? Such settings would demand a huge amount of trust, or we would have to check them continually.
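
That "continually check them" worry can be made concrete. Below is a minimal sketch, with entirely hypothetical setting names, of how an owner (or the vehicle itself) might compare the live settings against an explicitly chosen profile to notice that something has quietly fallen back to a factory default; it illustrates the idea, not any real system.

```python
# Hypothetical sketch: detect silent reversion of "morality" settings to defaults.
# All keys and values here are illustrative placeholders.

FACTORY_DEFAULTS = {
    "collision_priority": "minimize_total_harm",
    "swerve_for_animals": False,
}

def detect_reversion(live_settings: dict, owner_profile: dict) -> list[str]:
    """Return the keys where the live value matches the factory default
    instead of the owner's explicitly chosen value."""
    reverted = []
    for key, chosen in owner_profile.items():
        live = live_settings.get(key)
        if live != chosen and live == FACTORY_DEFAULTS.get(key):
            reverted.append(key)
    return reverted

if __name__ == "__main__":
    owner_profile = {"swerve_for_animals": True}      # what the owner chose
    live_settings = dict(FACTORY_DEFAULTS)            # an update quietly reset everything
    print(detect_reversion(live_settings, owner_profile))  # ['swerve_for_animals']
```

Running a check like this on every trip is the machinic equivalent of re-reading the privacy settings after every Facebook redesign.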

What are the cultural aspects of machine vision? Is it more like a "language", functioning as a top-level controller of how systems behave and respond? How would an autonomous system in Asia differ from one in the West?

For every anthropological aspect that we deal with now, there would need to be some machinic version of it.

***

Postscript:

It's interesting to apply human universals to machines. How could these be incorporated into robotic entities ("Machine Universals")? A rough sketch follows the list below.

http://humanuniversals.com/human-universals

conjectural reasoning
contrasting meaningful elements in language
language employed to manipulate others
language is translatable
language not a simple reflection of reality
linguistic redundancy
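
One way to make the question concrete is to treat the list as a lookup table from human universal to a candidate machine counterpart. The pairings below are purely speculative placeholders, offered only to show the shape such a "Machine Universals" catalogue might take, not an established taxonomy.

```python
# Speculative sketch: Brown's human universals (listed above) paired with
# hypothetical machine analogues. The machine-side entries are illustrative only.

MACHINE_UNIVERSALS = {
    "conjectural reasoning": "inference under uncertainty (probabilistic models)",
    "contrasting meaningful elements in language": "discrete tokens and symbols in message formats",
    "language employed to manipulate others": "persuasive or adversarial message generation",
    "language is translatable": "interoperable protocols and format converters",
    "language not a simple reflection of reality": "learned representations that abstract away from raw input",
    "linguistic redundancy": "error-correcting codes and redundant signalling",
}

for human, machine in MACHINE_UNIVERSALS.items():
    print(f"{human} -> {machine}")
```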
