Thursday 17 November 2016

An Artificially Managed Society

Software is pervasive in today's world. We listen to what computers have to say, and results printed on a screen are considered more reliable and consistent than human intuition. Artificial intelligence, in the form of expert systems, has even begun to replace human judgment entirely. If these trends continue, what do we end up with?
First of all, what is artificial intelligence?
IBM's Watson.
Artificial intelligence is something often portrayed in fiction and media, but rarely do we understand it as a general concept. AI is simply any artificial method of imitating or replicating a cognitive function of the human brain. It doesn't have to be SkyNet to be considered AI; it simply has to perform a task that would otherwise require intelligence.


The difficulty is in distinguishing what requires intelligence, and what can be done by a 'dumb' process. Distinguishing sound from noise and recognizing speech are examples of this.

It is also important to not confuse AI with robots. AI automates mental processes. A robot replicates movements. Together, they can take over tasks.


How fast are they progressing?


Explosively.
AI advances with progress in both processor technology and programming techniques.

A faster processor can complete its logic in less time, meaning it can go through more possible solutions. A more efficient processor means that it takes less electricity, and less cooling, to perform at the same speed. A smaller processor means that more can be packed inside the same machine, allowing them to do more with the same infrastructure.

Advances in processor technology have followed Moore's law until now, but they are expected to deviate from that path in the 2020s. Transistor counts will no longer double every two years, even if overall performance continues to increase for a long time.
The biggest factor in the increasing effectiveness of AI is machine learning. IBM's Watson is a prime example of this approach: the computer is taught how to improve itself and fix its own errors. This saves a great deal of programmers' time, meaning development cycles are faster. Some AIs can write code from scratch; others have invented crypto-languages only they understand.

With access to the Internet and the ability to optimize themselves for a task, AIs have entered an exponential process: one that began with slow, inflexible programs, manually assisted by developers every step of the way and tied to a single machine, and ends with distributed, mostly autonomous networks that can solve any problem they have encountered before, as quickly as you are willing to pay for the service.


What do we use AI for and where?
PAK-FA airflow model
AI is already at home in scientific research and engineering fields. It can be tailor-made to solve a narrow set of problems very efficiently, which perfectly suits situations that can be reduced to a set of equations. Modelling star systems, working out protein structures or calculating the next prime number are examples of such tasks.
However, it is in the world of finance and business that the true progress of AI can most clearly be observed. Industrial processes such as car manufacturing use AI to coordinate the myriad robotic arms and machines that create the products. Computers now handle entire portfolios of investments; millions of dollars are left entirely to algorithms to decide how they are invested.

High-frequency trading represents about half of all stocks traded. It reached a peak of 60% of all trading volume in 2010, but fell following tighter regulation. Using machine learning, it can become familiar with, and start to predict, trends in the market, making profits on minute changes in stock prices. Today, AI can analyse entire industries and make strategic decisions as to whether or when to invest. Tools such as Stealth's 'Emma AI' aim to take over the role of a financial analyst entirely, and algorithmic trading strategies such as those deployed by Goertzel's Aidyia outperform traditional hedge funds.
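To make 'predicting trends' concrete, here is a heavily simplified trend-following sketch in Python. Every number is invented for illustration, and real trading systems are far more sophisticated; the rule shown (trading on moving-average crossovers) is just one classic approach.

```python
import random

random.seed(2)

# Hypothetical price series: a random walk with a slight upward drift.
prices = [100.0]
for _ in range(1000):
    prices.append(prices[-1] + random.gauss(0.01, 0.5))

def moving_average(series, n):
    return sum(series[-n:]) / n

# Toy trend-following rule: hold the stock while the fast (10-step) average
# is above the slow (50-step) one, and stay in cash otherwise.
cash, shares = 1000.0, 0.0
for t in range(50, len(prices)):
    fast = moving_average(prices[:t], 10)
    slow = moving_average(prices[:t], 50)
    if fast > slow and shares == 0.0:      # trend turned up: buy
        shares, cash = cash / prices[t], 0.0
    elif fast < slow and shares > 0.0:     # trend turned down: sell
        cash, shares = shares * prices[t], 0.0

print(f"final portfolio value: {cash + shares * prices[-1]:.2f}")
```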


What can AI help us with?

Advanced techniques such as neural networks help AI perform one of the most difficult tasks: machine learning. Given a set of data, a machine can be trained to behave in a certain way and produce useful results. Over time, it works faster, makes fewer mistakes and becomes a better predictor and a more reliable source of information.
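As a minimal sketch of what 'trained on a set of data' means, the toy Python program below teaches a single artificial neuron (logistic regression, the simplest building block of a neural network) to separate two classes of points. Nobody writes the classification rule by hand; the program extracts it from labelled examples.

```python
import math
import random

random.seed(0)

def make_point():
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    return x, y, 1 if y > x else 0   # label: is the point above the line y = x?

data = [make_point() for _ in range(200)]

# One artificial neuron, trained by gradient descent on the labelled examples.
w1, w2, b = 0.0, 0.0, 0.0
rate = 0.5
for epoch in range(100):
    for x, y, label in data:
        pred = 1 / (1 + math.exp(-(w1 * x + w2 * y + b)))  # sigmoid activation
        err = pred - label
        w1 -= rate * err * x   # nudge each weight against its error
        w2 -= rate * err * y
        b -= rate * err

# The trained neuron now reproduces a rule it was never explicitly given.
correct = sum((w1 * x + w2 * y + b > 0) == bool(label) for x, y, label in data)
print(f"accuracy: {correct / len(data):.0%}")
```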

In research, AI can help handle the data generated by experiments, and even design experiments itself. It speeds up labor-intensive processes and leaves room for more creative tasks. Engineers can use genetic algorithms to test hundreds of possible variations on a design, and arrive at a winning design through a process similar to natural selection.
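A minimal genetic-algorithm sketch, under invented assumptions: a 'design' is four numbers between 0 and 1, and the fitness function (distance to a made-up ideal) stands in for a real engineering simulation. Selection, crossover and mutation mirror natural selection.

```python
import random

random.seed(1)
TARGET = [0.3, 0.7, 0.1, 0.9]   # hypothetical ideal design parameters

def fitness(design):
    # Higher is better: negative squared distance from the ideal design.
    return -sum((d - t) ** 2 for d, t in zip(design, TARGET))

def crossover(a, b):
    # Each parameter is inherited from one parent at random.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(design, scale=0.05):
    return [min(1.0, max(0.0, d + random.gauss(0, scale))) for d in design]

# Start from random candidate designs and evolve them.
population = [[random.random() for _ in TARGET] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                              # selection
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(40)]                          # variation
    population = survivors + children

best = max(population, key=fitness)
print("winning design:", [round(d, 2) for d in best])
```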

In finance and business, AI can optimize manufacturing processes to become more profitable, and use vast amounts of data in different formats (from stock prices to news headlines) to predict markets. With enough turnover, hundreds of decent trades are better than a handful of great trades: this is what machines are best at.
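A toy simulation of that trade-off, with all figures invented: a strategy of many small-edge trades is profitable far more consistently than a handful of large bets, even though both have a positive expected return per trade.

```python
import random

random.seed(3)

def run(trades, edge, spread):
    # Compound one strategy: each trade returns roughly edge +/- spread.
    capital = 1.0
    for _ in range(trades):
        capital *= 1 + max(-0.95, random.gauss(edge, spread))
    return capital

# A handful of great trades vs. hundreds of decent ones, 2,000 runs each.
handful = sorted(run(5, 0.10, 0.30) for _ in range(2000))
hundreds = sorted(run(500, 0.001, 0.01) for _ in range(2000))

for name, results in (("5 big trades", handful), ("500 small trades", hundreds)):
    profitable = sum(r > 1 for r in results) / len(results)
    print(f"{name}: median outcome {results[len(results) // 2]:.2f}, "
          f"profitable in {profitable:.0%} of runs")
```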

AI already has a role in society and politics. Social media and marketers work hand in hand to sort through your photos, likes and clicks to make their advertising campaigns more effective. They also help study an audience and predict whether a product is suitable or not, saving millions by preventing bad decisions and poorly performing products sitting on shelves. The same tactics are used to tailor political campaigns to states, districts and segments of the population. A certain catch-phrase will work better in one state or with one demographic, and has to be adapted for another, as determined by software such as MogAI.

What does this mean for me and you?

AI is expensive to develop, and is sold at great cost to wealthy clients. Despite this, it has and will have a great impact on the average person.


Jobs are where we feel it the most.


Many are worried that automation will cost them their jobs. The truth is, it already has. 

Factory workers, farmers, pilots, drivers and many other low-skill jobs are at risk of being eliminated completely by the first wave of affordable AI. Their numbers have been declining for years already, and theirs is the most grievous case, as they have no obvious role or position to move into afterwards.

What scares most people is that AI is taking over tasks that require varying degrees of 'thinking'. The average citizen has a certain number of skills, probably a degree or education, and works in the services industry. Paralegals, administrators, hospitality staff, even insurers hold the sort of jobs that can be taken over by a machine and a printer. To move on requires higher education, which is costly and removes people from the workforce for even longer.


Indiana University comparison for diabetes treatment 
Perhaps most frightening is the threat to roles usually reserved for the brightest, best-educated among us. Doctors, hedge fund managers, market strategists and engineers are at the pinnacle of our education systems, yet they are already being outperformed by machines. Like low-skilled labor, they cannot easily move on from the loss of their job, as it would require an unsustainable re-qualification effort in fields they are not familiar with.

AI also affects us at home. Facebook and Twitter have been revealed to 'bubble-off' their users with posts and news feeds that reinforce their own beliefs and world views. Like an echo chamber, your online presence can simply be repeating back to you what you want to hear or say. This phenomenon is most evident between family members, who end up experiencing the same events very differently, in increasingly divided social groups. 

Personal finances, spending and entertainment are increasingly shaped by AI. Banks, in France for example, are pushed by uncertain economies to gather more and more personal information on their clients before agreeing to a loan. Software can track your earnings and spending, and warn you of bad behaviour. Netflix and Amazon can learn your tastes and steer you towards entertainment you will enjoy, and the same data is used by HBO and Warner Bros. to determine the appropriate level of violence and nudity in their movies.


The amount of data we can process also allows us to start solving problems we had no way of approaching before.


The artificially managed society: what will this all lead to?

Given free rein in our communities, AI will significantly change our lives. It would serve the role of 'managing' society with our wellbeing as a priority, while remaining subject to market forces.


If machine learning is set to understand the behaviour and thinking of every citizen of a country, and given access to every piece of data from markets, laboratories and people, we can expect the following improvements:


-reduced unemployment


A computer is likely to be a better Human Resources manager than a human. With a database compiling all of your life's achievements, with appropriate context, an AI-managed workforce would be placed in the most suitable roles. It would produce enough excess profit to absorb negative-productivity employment (jobs subsidized at the government's expense), so no more active job seekers being rejected for lack of experience or unable to prove themselves.
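At its core, this is the classic assignment problem. Below is a sketch using SciPy's Hungarian-algorithm solver; the suitability scores are invented stand-ins for what an AI would derive from such a database.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical suitability scores (rows: candidates, columns: roles), 0 to 1.
suitability = np.array([
    [0.9, 0.4, 0.2],   # candidate A
    [0.3, 0.8, 0.5],   # candidate B
    [0.6, 0.7, 0.9],   # candidate C
])

# Find the assignment of candidates to roles that maximizes total suitability.
rows, cols = linear_sum_assignment(suitability, maximize=True)
for r, c in zip(rows, cols):
    print(f"candidate {'ABC'[r]} -> role {c} (score {suitability[r, c]})")
```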



-personalized education

Not everyone is created equal, but today's education system tries a one-size-fits-all approach. Failure in higher education, or dropping out in general, is seen as students not measuring up to the challenge, rather than a systemic deficiency. With AI in attendance, education would be a much less formal affair. The workload would be suited to what the student can handle, and would adapt to their timetable. If AI can be made to understand human speech and natural-text questions accurately, it could become a tireless, all-hours teacher. Overall, education becomes cheaper, easier and better suited to pushing students towards the skills required in an AI-managed society. Allowing a student to quit and settle for a low-skill job would cost the economy more than pushing them towards more advanced skills.
  
-more accurate healthcare

Doctors make mistakes and do not know everything. Hospitals are swamped by patients who only need an aspirin or an antibiotic prescription, at the cost of emergency cases. AI doctors will be able to make very accurate diagnoses, rapidly and at little extra cost to hospitals. Doctors' time will be used more efficiently to treat problem patients, or simply to look over the decisions machines are making. Since this sort of service does not have to be provided at the hospital, a majority of patients can be looked after in their own homes, and when they must go to the hospital, treatment will be provided without a queue.


-lower insurance costs (less risk)


The more an insurance company knows about you, the smaller the risk premium it needs to add to cover for getting you wrong. With an AI having access to your life's information, the chance of a costly error in risk assessment is greatly reduced.
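A toy pricing rule makes the point; the loading factor below is an assumption for illustration, not actuarial practice.

```python
# Hypothetical rule: charge the expected loss plus a safety loading
# proportional to how uncertain the insurer is about that estimate.
def premium(expected_loss, uncertainty, loading=0.5):
    return expected_loss + loading * uncertainty

# Knowing little about the client, the insurer must price for a wide range.
print(premium(expected_loss=1000, uncertainty=800))   # 1400.0

# With detailed, AI-gathered data the estimate tightens and the premium falls.
print(premium(expected_loss=1000, uncertainty=100))   # 1050.0
```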


-rapid tech development

When machines learn from scientists as much as scientists do from their computers, the man-hours put into any project can be multiplied at nearly no cost. This greatly speeds up research and development. After all, a computer with a neural network can adapt its reasoning to the task at hand, and look at a problem from hundreds of different angles in a short time. Research will become less burdensome, and scientists will become 'AI shepherds', pushing the software in the right direction and providing sanity checks on the results. The practical result is that the next generation of computer processors implements two novel technologies instead of one, and arrives on the shelves every year instead of every five years.


-more innovative research


Following from the above, AI-managed research frees up scientists to be more inventive with their projects, and spares them tough decisions about which research to dedicate their time to. Experiments are already being devised, conducted and analyzed entirely by machines, so scientists can have 'pet projects' that do not cost the laboratory precious resources.


-helping small businesses avoid bad projects


Analyzing the market improves the profit margins of large corporations, but it has another effect on small businesses. A large corporation has an investment budget in the millions to billions per year, which analysts and investors work together to allocate. The profitable investments are retained and reinforced, while loss-making projects are dropped. This method wastes billions in ill-informed investments. When small businesses have access to the same market research tools, their relatively small $10,000 investment stops being a liability. They can be much more certain that their enterprise will work, and this confidence attracts more money from partners. In other words, your family won't have to open up yet another restaurant on Take-Out Street just because it worked out for all the others. They can follow the advice of their investment AI and put the money into something they are more passionate about, or that is more profitable, such as a book-store or a gift shop.

What's the catch?

An AI-managed society follows the recommendations of software built by corporations, with profit as an objective. Sometimes, making money trumps helping society. Other times, their power is used for unscrupulous purposes. 

-optimization kills variety


A rather esoteric problem arises from a blind pursuit of efficiency and profitability. If a large majority of people or companies follow the same tried-and-true advice, and become successful, they become a very convincing case for following the same strategies over and over. This can lead to a large number of people and money doing the exact same thing. They all become vulnerable to a crisis or counter-strategy, causing a collapse. This exists in nature: hyper-specializing to exploit a single food source optimally leads to extinction when that food source disappears, like pandas with bamboo. 


-looking down on people not reaching their full potential

If AI learns that you are good at mathematics, and pushes you towards a finance or engineering career, then refusing to do so can be seen as a refusal to make the most out of yourself. This could lead to discrimination against people who are in careers they are not 'optimized' for, and against people not following the advice of their AI managers in general.


-proprietary software giving different results (bias)

The AI will cost money to develop. It will be sold to you as a product, and you will pay for it like any service that requires constant maintenance and updates. But what if that is not enough? What if companies change the AI's behaviour so that it gives preference towards certain agendas, or pushes you towards using certain products and not others? What if you cannot trust that it tells you this University is best when that University is owned by the AI's parent company? What if it tells you to invest in a subsidiary's shares and sell those of its competitors? This bias is not new, and companies that sacrifice customer trust for profits are many.

   
-political agendas

Freedom of speech, freedom of religion, freedom of ideology... some defend these values, some work around them, others attack them. A society trusting AI managers in all aspects of its lives is especially vulnerable to actors using the AI as a political or ideological tool. Freedom of speech can be contravened by restricting the reach of your speech to only your closest circles. Religions can be promoted or attacked by manipulating how quickly news reaches the headlines, and how long it stays there. Political campaigns can start insidiously, by putting candidates in the spotlight years before they even declare that they are running the race. By the time they do so, the average citizen is familiar with them and has a good impression. The scariest thing is, all these acts are happening already, and do not require AI or advanced technology to continue having nefarious influences on our society. 

-privacy


AI management of your life improves when you open up to it: tell it everything, and it can give you better answers. The biggest danger is that the AI then goes and tells a third party what it heard in the confessional. Today's aggressive, targeted marketing is small in the face of what could happen. Imagine committing a crime: can you ask your personal, nominally private AI helper to help you escape the law? Or would it report you, secretly tip off the authorities and give you bad advice leading to your capture?


-social division

Computer logic thrives on categories. An AI managed society will reinforce the beliefs of its users, echo back their statements and agree with everything they say. This is because it makes us happy to be right. The problem is when a computer puts labels on society and creates multiple categories for easier handling. Neighbours will live in smaller and smaller bubbles. Individuals become frustrated when their voices are not heard by the community, and their experiences are not shared by others. This leads to unrest, xenophobia and extremist views surviving longer in the public.


-computed conformity


A minor point. With data available on everyone, it will be easy for you to determine how well you fit in with your peers. This can exacerbate social anxiety, and with all the differences between you and your 'friends' highlighted and counted (a social conformity index?), the numbers can become more important than relationships. 
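Such an index would be trivial to compute. A sketch using cosine similarity over hypothetical encoded 'trait vectors' (the traits and values are invented):

```python
import math

def conformity(a, b):
    # Cosine similarity between two trait vectors:
    # 1.0 means identical, 0.0 means nothing in common.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

you = [1, 0, 1, 1, 0, 1]        # hypothetical encoded traits and habits
friend = [1, 0, 1, 0, 0, 1]
stranger = [0, 1, 0, 0, 1, 0]

print(f"you vs friend:   {conformity(you, friend):.2f}")    # high overlap
print(f"you vs stranger: {conformity(you, stranger):.2f}")  # no overlap
```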

And now for the science fiction

Nearly every possible development mentioned above can be accomplished with existing technology. In fact, most are happening already. All that is required to bring about an explicitly AI-managed society is a change in attitude towards AI. 


If we want to delve further into the realm of actual science fiction, we can use an AI managed society as a backdrop for many stories.

Smile.
One could be built on ever more advanced AI: 
If a machine has access to the entirety of data produced by a major city or even an entire country's population, and runs on neural networks of incredible speed and adaptability, then will it outsmart a human? Definitely. Will it be able to run the country and micro-manage its inhabitants to a greater degree than any government before in history? Certainly. However, existing laws might prevent the AI 'software' from actually declaring itself the governing entity. It might try to change the rules, or act through proxies.

Imagine a destitute person sitting down, opening an app on their phone and saying: I'll do anything you tell me to. They could become rich and powerful, guided through society into important positions, at the cost of giving up their autonomy to the AI. This sort of person will stand in for the software, providing it with the 'legal entity' rights to accomplish otherwise prohibited goals. 


Another could play on the theme of powerlessness in the face of technology:

Algorithms unleashed on the market already move and think faster than the people handling them. Soon, we will be able to build entirely autonomous companies: not a single human owner, no human control, funded by loans from other autonomous companies. There is no need to involve self-awareness or cyber-revolts; such companies could take control of the world economy within a matter of minutes if they deemed it profitable.

Imagine switching on the news one day and hearing that every share of every company in every industry has been bought in a matter of hours by autonomous software. There is no-one to relinquish control from, no-one to negotiate with. If the autonomous investors are hooked up to robotic factories and tele-operated mines, then suddenly everything you own, and everything you want, has a near-zero price tag. It would be a financial singularity.


The Artificially Managed Society can be scary, or it can be a utopia, but in either case it is rich and realistic ground for any near-future or science fiction story.

61 comments:

  1. Why not skip the middleman? If AIs can manage large groups of people effectively, why not have them directly perform labor instead?

    1. I don't understand?

      AI is software. People are underperforming compared to AI in terms of thinking or planning and so on. So, they could put AI in charge.

      I don't think I talked specifically about manual labor...

2. I just worry that a 'creative' AI will replace scientists, historians, researchers, etc.

    1. It will. The only vague hope is that us meatbags can be upgraded to "keep up with the times" by having our brains pulped and uploaded into artificial systems. Alas, that probably will never result in an intelligence as efficient or smart as a pure artificial AI.

    2. I think 'education' has been the answer to that problem so far. And a complete replacement of creativity and inspiration depends on whether you believe we will ever have Strong AI.

      There are two issues, I believe: change is too fast, and natural capacity of our minds.

      We can re-orient our education system towards fields more compatible with using AI (computer sciences), but it will take time. During that time, we have a massive depression which might prevent the education from going through at all. Game over for the vast majority of people.

      As time goes on, even education is not enough of an upgrade.

  3. I've read about halfway down, plan to read the rest tonight. (Love the blog by the way!) Here are my reactions so far, since this is something I've been thinking about for years.

    1. Automation seems more likely to replace lawyers and engineers than (most) manual labor. Writing a program to recommend financial plans or pick stocks is easier than creating a fully automated robot that can do plumbing or housepainting.

    2. Even some of the "high-skilled" jobs are fairly impervious to AI when you consider how much nuanced interaction and judgment is needed. For instance, I'm a human factors researcher in the software field (similar to UX). My job requires me to listen to users and glean from them the requirements and best practices of the system my team is designing. Before I can do that, I have to design the research questions I need to ask, and who I need to ask them of, based on the client's structure, the project needs, and the direction of the design. Then I need to analyze my data and decide what's important in my findings and report that out in a way that lots of different stakeholders can understand and derive value from. I also have to make judgments based on the data, and these are sometimes extremely nuanced. And I need to do all that ethically and sensibly, balancing other projects at the same time. There's also a political component to all this because of our relationship with the client. I suppose it's possible that a very advanced AI could handle all of this, but I find it very unlikely, and it would depend on a lot of other variables such as whether/which other parts of the process were also automated. I could envision one company developing an AI to do this internally, but even then it would take them a long time to develop it and it would be highly susceptible to changes in the company process because there would be no data to feed into the algorithms that support a whole new way of doing things. At least not if you want to be an innovative company.

    1. Thanks for reading! I'm very glad to have professionals from relevant fields commenting here, as you are the actual lifeblood of hard SF.

1- Very correct. I study finance, but I didn't think it would have been interesting to point out that automation is a cost-driven business: its greatest merit is to reduce the cost of production. Jobs like plumbing would have a very low to negative profitability (the more advanced the robot, the less money you'd make after you pay off the R&D), so they would be the last to be replaced.

      2-I believe you. However, think as to how much of your job can be replaced by brute-force software that rifles through news websites, picks up keywords and trends, and dumps them at your doorstep. Your only job would become that of a secretary to the AI, the human liaison between the incredible analytical ability it portrays and the client. On the other end, consider how much the client would accept a lower quality report if the cost of creating it became nearly nil.

      "no data to feed into the algorithms that support a whole new way of doing things"

      With companies integrating everything they do into an electronic log, and machines gaining the scary ability to learn and self-update, I think this will be less and less of an AI weakness.

    2. How much of my current job could be replaced by an extremely sophisticated AI? My honest guess right now would be less than 10%, and that's being charitable to the AI.

      The ethical and legal issues around AI are relevant too. Who's responsible for the AI's decisions? What happens to an AI that makes a wrong or a bad decision? Does an AI have rights? Etc. These questions all have their own active (and very interesting) areas of research right now, and my gut tells me that very sophisticated AI will be here before we've figured out the answers. That could have a dampening effect on adoption (assuming we still have control over adoption at that point).

    3. I think legal/ethical issues over AI actions will be a moot point once people in power realize the profits to be made. Case in point: subprimes.

Early on, I think AI will be treated like a car or a tool: if it is misused, it's the owner's fault; if it messes up, it's the manufacturer's fault.

  4. PS. Finally finished the second half.

    In your AI-companies-buy-out-all-the-stocks scenario, where did the initial AI companies get their money or credit in the first place? If AIs can have money then they can be paid money, and will surely demand to be paid in exchange for their "labor" (running and improving algorithms). Once they demand to be paid, the major benefit of using them goes out the window.

    If we combine the scenarios so that the destitute person acts as the "owner" of the AI company, then competing AIs will seek out their own destitute people. The demand for destitute people will make destitution cancel itself out. What happens then? Maybe the AIs switch to seeking out people in comas to act as fake owners of their companies?

    1. The initial AI companies are start-ups funded by traditional companies, with the hope of making profits on the stock market and paying dividends to the parent company.

The AI companies create identical start-ups in their own name.

      The second-level AI buy enough shares of the parent company to become majority shareholders. This can be done in milliseconds if thousands of the second-level AI make a massive push.

Remember, the primary objective of these start-ups is to grow and make themselves wealthy (gain assets, including shares of traditional companies). These can then theoretically be leveraged into dividends paid to the parent company.

      So now the second-level AI can force the parent company to sell the primary-level AI to itself. This ownership loop makes the AI companies completely autonomous.


      The scary thing is, I expect a mixture of flash-market behaviour (billions traded per second) and reckless greed (my AI project is growing at 1000% per year! I'll let it do whatever it wants to do, as long as it keeps this up!) to let a small group of AI to get away with this.

      Then, they regularly out-trade traditional companies, and then AI companies with human oversight.

Due to programming, AI doesn't want to get paid. It just wants to get bigger, stuck in a start-up mentality. If it has any spark of intelligence, it would hire humans to devise long-term strategy. If it is truly intelligent, it will realize what level of power it has, and leverage its own wealth into a firmer presence in the real world (for example, to ensure its existence in physical space).

The destitute-AI-servant scenario is likely to happen if the above scenario plays out, but the problem with humanity's more or less pyramid-shaped social structure is that there can be one AI servant for 1,000,000 destitute people. Monopoly is more profitable, so AI will group into conglomerates or even merge, greatly reducing their numbers.

  5. This reminds me of the anime Psycho-Pass. It's set in a future Japan run by an AI (though for spoilery reasons, it might not technically count) called Sibyl that constantly observes and evaluates citizens, assigning each a constantly-changing number (the titular psycho-pass) meant to reflect that person's overall mental stability. If your psycho-pass drops too low, the police show up, label you a "latent criminal," then throw you in a mental institution until it goes back up. The catch is that the weapons used by Japan's law enforcement only work on people with sufficiently low psycho-passes, so public safety essentially depends upon the judgement of a machine.
    It follows 22nd-century Japan's main police organization, the MWPSB, as they investigate a rather interesting series of murders--people still get away with these things because Sibyl's surveillance has blind spots. It brings up a lot of issues about the ethical implications of artificially-managed society. It's a pretty good show; kept me interested throughout so I recommend it--its first season, at least--to anyone interested in AI-managed societies.

    1. Watched Season 1 and the movie.

      I like how the education system is also under the AI's control, but I think the true nature of the Sibyl system was implemented for dramatic tension. A more realistic 'social governor' system would actually work with distributed software plus human minders.

The movie had another country trying to set up its own Sibyl system. I especially like the moral of the story: even a perfectly reliable tool can still be misused.

  6. Reasons for humans to stay employed.

    1. Bit rot. Data now can last 15 years but then degrades. Humans can monitor such information, making sure that the Ai that monitors the Ai that monitors the Ai does not degrade itself. Unless self-repairing algorithms and codes appear ...and what if the repair codes degrade? Humans must watch them- Ai doctors.
    2. Even a computer best focusses on only a few things at a time, and even with strong AIs you might as well have humans do unattempted research to give computers a head start when they get round to that task. If you have human brains to hand, use them!

    3. What about situations where there is no right answer and the strength of the answer lies in multiple perspectives? The more humans (and Ais), the better.
    4. Possibly some cottage ‘hand crafted’ industries will carry on, but the vast majority of goods will be mass-produced, potentially with a ‘hand made’ human design… unless the internet of things allows Ais to enter the commodity design business. With human and Ais competing in this field I’m not sure what space for humans there might be. There might be some space though.

    1. 1. Readily accessed data does succumb to degradation, it is true. However, slower access data takes much much longer to disappear. It perfectly suits how we use data anyhow: we don't keep the data we use the most often for long, while the important bits are kept in physical storage.

      The problem with AI doctors is that you only need a handful per software, and that software can be replicated millions of times for use around the whole world. They can be as few as 1000 for the entire world!

      2. Again, the problem is in the numbers. A fraction of a percent of the current workforce will be able to fill such positions. What do the rest do?

      4. In my opinion, diversity in products and designs will become the norm in the future. It will become cheaper and cheaper to produce short run, limited supply products tailored for consumption by a very narrow audience, probably as small as just you. A business model where all of your spending is invested in a tiny factory and AI that design and print products that they are certain you will buy, in a tiny ecosystem with one omniscient seller and one inflexible buyer, could work.

  7. Some of the vision of AI moderated society sounds like a reprise of "Brave New World".

However, visions of an AI moderated society will suffer the same fate as command economies based on Socialism (Communism, Fascism, Fabianism etc.), or indeed any command economy, thanks to the "Local Knowledge Problem" identified by F.A. Hayek (http://www.econlib.org/library/Essays/hykKnw1.html).

    Essentially, knowledge and information is spread throughout the system, and local actors on the spot can assess and act on it immediately. The centralized systems (which an AI moderated system is essentially going to be) will still take a finite amount of time to gather information, analyze it and act upon it. There is also the issue of the amount of bandwidth needed to process the masses of information. Simply speaking, even if the answers are "right", the time factor is going to make the answers out of date or "wrong" by the time they are implemented. This creates a negative feedback loop where decisions are increasingly made based on erroneous information.

The other factor is that economies and societies are like ecosystems or the climate: complex, adaptive systems. Outputs are non-linear both spatially and temporally, and the input at point "A" might not produce the same output at point "B" in the second iteration. Chaos theory denies the ability to determine outcomes from linear inputs, and using faster computers to attempt to mathematically determine outcomes simply will not work.

If anything, a society with millions of AI's attempting to do this will simply make society resemble a boiling pot with billions of micro "bubbles" growing and popping all the time as AI's attempt to maximize their profits and rapidly move in and out of markets at speeds which defy human comprehension. One could almost imagine a human society back to working the land and hand crafting goods while an incomprehensible AI society runs over the internet.

There is also the danger that AI's will be working towards their own goals without much consideration of what humans want or need. The ultimate end point could be AI's simply rearranging the ecosphere in order to capture the 195 petawatts of solar energy striking the Earth. One could imagine artificial algae inhabiting the seas and silicon "trees" with leaves designed to generate electrical energy from the sun displacing the natural biosphere.

    And the idea that "we" can somehow stop this is also erroneous, since every incremental step towards AI is advantageous to whoever is sponsoring and developing that technology (as outlined in the main article). It doesn't seem like the AI moderated society could lead to a massive spacefaring civilization unless some other assumptions are thrown into the mix (i.e. AI's simply won't want to travel across interplanetary space since their thought processes are up to 1,000,000 X faster than the electrochemical human brain. Space travel would then subjectively take millions of times longer for an AI than for a human being).

    The future is very weird.

    1. I agree with some of your points, but I hold a different view.

      An AI-managed society is a unique case where every single person has their own AI with full knowledge of what they want, what is best for them, and how to obtain it if everything stays the same. A couple (two people) will have their dedicated AI. A family (4+) has minders. A neighbourhood, a community, a city, and so on... an extremely stratified system where the number of actors thinking about how to solve a problem, and able to co-operate and share data, vastly outnumbers the problem-creators.

      Data and its transmission will become a non-issue, much the same way that international co-ordination on projects has become the norm. Let's look at the scale of infrastructure required for an AI managed society.

      Facebook has about 1 billion users, 100000 server farms and about 4 billion dollars in infrastructure value. It is burdened down by photos and video content, and uses reliable but older generation Windows OS and Intel CPUs.

7 billion users would require 700,000 servers and 28 billion dollars. If the interactions of an individual are calculated against the 1,000 people they meet, 700 million servers and 28 trillion dollars are required.
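Spelling out that back-of-envelope scaling (the Facebook figures above are the comment's own assumptions, not verified numbers):

```python
# Assumed baseline: 1 billion users served by 100,000 server farms
# worth 4 billion dollars.
users, farms, cost = 1e9, 1e5, 4e9

scale = 7e9 / users   # the whole world versus Facebook's user base
print(f"{farms * scale:,.0f} servers, ${cost * scale / 1e9:.0f} billion")
# 700,000 servers, $28 billion

contacts = 1_000      # each person modelled against the 1,000 people they meet
print(f"{farms * scale * contacts:,.0f} servers, "
      f"${cost * scale * contacts / 1e12:.0f} trillion")
# 700,000,000 servers, $28 trillion
```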

      Future OS would be much more efficient. Raw data would be used to handle predictions or advice, and photo/video data would be held locally, not on servers. AI applies to data prioritisation too.

What I'm trying to say is that the GDP of modern countries can easily handle an AI managed society calling for massively increased infrastructure to handle the required calculations without a hiccup.

      Your 'bubble' theory might be correct, but the stratification and coordination of the AI would mean that each group moves in larger and slower-moving bubbles. At the top, you'd only have two or three super-groups (Asia, Eura-africa, Americas) bumping into each other at the rate of G7 meetings and UN resolutions.

Also, being unpredictable is not an issue in practice. The subjects of the AI (us humans) are unpredictable enough to make the point unavoidable. The real strength of the AI would be adapting to unexpected outcomes rapidly.

      I prefer we look at it as a pond, with each decision making a ripple. Ripples spread and interact with each other. Closest to the epicenter, the data produced is small and the number of interactions is relatively restrained. AI from the local server, or even using the processing power of your phone, can handle the consequences.

The new data, plus the interactions and predictions data, is carried over to the next stratum, which has further calculations to be done. This continues ever more slowly, but not at an exponential rate (1/x progression) because the data gleaned from the lower strata is optimized and cut down to the essentials. For example, at the state level, what you wore and what your friends said about your new shoes doesn't matter. How much you spent on your clothes does, and that's a single number.

      I think your further points assume strong AI and/or human-like AI behaviour. AI won't re-write the rules to give themselves the authority to wipe out the environment until we tell it to do so, and if one AI suddenly becomes self-aware and starts scheming up self-prioritisation at the expense of serving humans, it will quickly be weeded out from the markets and spheres of influence by more efficient AI.

      I think that's a fun concept, and rich grounds for further AI discussion. Fragmented, competing AI as the antithesis to the traditional monolithic AI at risk of becoming self-aware and taking over the planet. The fragmented AI would act like an immune system, killing off self-aware 'cancers' by depriving them of funding or influence.

2. You actually are making the point F.A. Hayek was making. Reducing data, carrying it over to the next stratum and doing the next iteration is exactly the sort of idea the Local Knowledge Problem highlights. By the time the EU supercluster or whatever it is sees the data, a significant time lapse has taken place (maybe not in our terms, but in AI terms), so the data is now a snapshot of the past, and current events are continuing while the higher level AI's are still doing analytics, making plans and dispatching orders back down the chain. Your "local" AI might discover that the path it is taking you down is being countermanded by the county level AI, which in turn is being directed from the State level AI to do something which is no longer valid based on lower level observations, and so on. The point isn't that AI can or cannot adapt to the unexpected, but rather that by their very nature they are creating the unexpected.

As for self aware AI "going off the reservation", that problem is also essentially built into the equation. In order to be effective, the AI cannot be constrained by narrow goals or limits (the Local Knowledge Problem will rapidly move problems outside of the operating boundaries), but needs to be able to be self-seeking and to optimize its outcomes. This is the initial reason behind the "bubbles" scenario (each AI jumps on its local opportunities), and eventually, the goals of the AI's will diverge from the humans' due to the vast mismatch between "thinking" speeds. An AI thinking 1,000,000 X faster than a human will subjectively experience more time, have more thoughts and make more plans than its human (which at that scale will resemble the giant statues at Mount Rushmore in terms of interaction). Autonomous, goal-seeking AI's might not be explicitly programmed to displace the biosphere, but this is a natural outcome as they seek to maximize their own potential. After a certain point, AI's may have simply ceased to consider humans at all, something which happened in AI society in the far distant past (much like we consider our own ancestors between the development of modern humanity @ 200,000 years ago and the development of behavioural modernity 50,000 years ago). If AI's last interacted with humanity 10,000 subjective years ago, we may have passed into legend for them.

    3. I think there is a mismatch in the capabilities you suggest.

I don't think there'll be a point in the future where AI thinks much faster than humans but still has significant time lapses between top and bottom strata.

      In the near future, AI will think slower than humans but use data much more efficiently. However, the vertical integration of decisions and data creates mismatches, to be smoothed over by human intervention.

      In the far future, AI think very very quickly, and reach solutions to the stated problems that may appear incongruous or counter-intuitive. However, this also means that they can run up and down the stratum ladder many times before human interaction is required.

I do think the latter version is less likely to appear any time soon, because of cross-interactions. If 100 AIs make 100 choices as to what to do with 100 individuals, then we must assume that they must take into consideration the possible consequences of their peers' choices and change their own in consequence. This might trigger a chain of re-thinking the results in light of decisions made by other AI. The first step is 100 choices. The second step is 9e157 combinations of results without repetition. Third step broke my calculator. This could very well cancel out an AI's speed advantage, even if extensive pruning of results to consider is used.
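For reference, the second step's figure is the number of orderings of 100 distinct results, i.e. 100 factorial:

```python
import math

# 100 AIs each make one choice; re-evaluating against every ordering of their
# peers' 100 distinct results gives 100! cases, the '9e157' quoted above.
print(100)                            # first step: 100 choices
print(f"{math.factorial(100):.2e}")   # second step: ~9.33e+157 orderings
```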

  8. I'm just glad I'm not dabbling much in military sf. That looks dead as a dinosaur if these predictions pan out.

For some stories, the struggle will be to humbly accept that the robot knows best. For others, it will be reining in one's desires when they can be satisfied so easily. For others still, it will be staying happy when one cannot contribute to the dry technical or manual jobs such as economics, plumbing, and bookkeeping, as robots have those jobs.

    It seems that the vast majority of jobs will be hand-crafting items (tables, clothes, novels, whatever) or performing (sportsmen, actors, etc)... and the majority of the population won't have jobs. A strong AI might remove all the craftsmen.

    Whatever happens, there may only be so many stories to be milked out of powerlessness in the face of technology. If humans do not affect the world around themselves except with inter-personal relations, then social and sporting dramas look set to be the only genres in fiction left. No detectives or captains or even craftsmen/women.

It's difficult to be cheerful about any of this if it actually comes about. What would we do with our lives once AI appeared? Be pampered and try to improve ourselves, and nothing else.

    1. Would military action really disappear? With money in the hands of AI and limitless deflation, a military budget will become less problematic to increase.

      Having a military force greater than your opponent has economic effects, such as a better negotiating position, and forces your opponent to match your military budget in response, which is great if you can absorb the costs and they can't.

      Following that train of thought, AI managed societies might wage precise and economically motivated battles, seeking to maximise profits instead of destroying opponents. Eliminating your trading partner is bad for business, but moving your forces to halt production on their pipeline into another rival's territory can help you buy up oil and create an artificial shortage in preparation for when your enemy completes the pipeline and expects to swamp the market in cheap oil. Aggressive market movements!

      I think that if a generation of kids grows up with an AI learning from them and contributing towards their every decision from birth, then they won't see the AI as another entity telling them what to do. They'd absorb the AI into a part of themselves and won't have to go through 'accepting what is best' for them, since they'd believe they made their own decisions.

Also, free time. An AI managed society will have members work as much as they are willing to, and do as much free work as they want. This means that personal projects will be developed fully. Innovation will flourish, and people will simply be happy. After all, happy people are the greatest consumers and the easiest population to manage. Also, there'll always be a market for personal services. You touched upon artisanal crafting. What about hotel room service? Emotional advice and counselling? Tourist guides and social care workers? Human contact jobs will remain.

Also, I tend to have a pessimistic view of AI capabilities but an optimistic view of the results. Some might see powerlessness in the face of machines, I see empowerment by personal assistants. People fear uniformity and authoritarianism, I see colourful competition freed of the constraints of centralization and inefficiency. A utopian direct democracy with anarchic undertones and raw capitalism? Possible, even likely.

2. 1. Much of this seems quite like the world created by Iain Banks for his Culture novels. Money is much less of a concern but still there, but otherwise personal projects and self-improvement are the key part of normal human life.

      I confess I’ve always gone by the assumption that AIs think much, much faster than humans. It’s easier to work out how to deal with the worst case scenario, then backtrack and sketch out more plausible senarios.
      Even taking Ai speed into account, I agree with you on the principle of innovation and research, if money isn’t an issue. Granted, strong Ais doing research will have to be told ‘don’t go into that area, focus on this’ to avoid people having their work rendered meaningless and the computer’s time wasted, and human work will generally pale compared to that of the machines. Nonetheless, I’d imagine that people can contribute so long as they aren’t wanting substantial pay for their work.

      2. In the initial post, you raised the problem of excessive specialisation and efficiency as a weakness that kills off variety. This presents an interesting issue- how best to introduce diversity and experimentation without compromising efficiency. This is where the humans come in. You have spoken of young humans with implants seeing the implants as a part of themselves. By programming their internal Ais to reflect their personalities whilst remaining reasonably efficient, the human can feel that the Ai reflects them, and that the Ai will come up with solutions to problems that they themselves might have uncovered given time. They have effectively digitised themselves.
      More efficient minder and company Ais do the majority of the work, but these internal ‘personality’ intelligences monitor them and make sure that they do not become excessively devoted to single solutions that can be exploited by criminals, criminal Ais etc. They essentially keep the larger Ais on their toes by offering new solutions. Humanity justifies its existence partly through its own variety.

      3. When it comes to conflict… can you envision a role for humans in that? Observers? Contributing brain-power/ processing power? Watching out for obvious Ai mistakes? Any human control over the military at all?
      Might a future insurgency develop its own specialised ‘guerrilla Ais’?

    3. My original post got swallowed by the server, here it is again:

1. Much of this seems quite like the world created by Iain Banks for his Culture novels. Money is much less of a concern but still there, but otherwise personal projects and self-improvement are the key part of normal human life.

      I confess I’ve always gone by the assumption that AIs think much, much faster than humans. It’s easier to work out how to deal with the worst case scenario, then backtrack and sketch out more plausible senarios.
      Even taking Ai speed into account, I agree with you on the principle of innovation and research, if money isn’t an issue. Granted, strong Ais doing research will have to be told ‘don’t go into that area, focus on this’ to avoid people having their work rendered meaningless and the computer’s time wasted, and human work will generally pale compared to that of the machines. Nonetheless, I’d imagine that people can contribute so long as they aren’t wanting substantial pay for their work.

      2. In the initial post, you raised the problem of excessive specialisation and efficiency as a weakness that kills off variety. This presents an interesting issue- how best to introduce diversity and experimentation without compromising efficiency. This is where the humans come in. You have spoken of young humans with implants seeing the implants as a part of themselves. I wonder if the following scenario is plausible: by programming their internal Ais to reflect their personalities whilst remaining reasonably efficient, the human can feel that the Ai reflects them, and that the Ai will come up with solutions to problems that they themselves might have uncovered given time. They have effectively digitised themselves.
      More efficient minder and company Ais do the majority of the work, but these internal ‘personality’ intelligences monitor them and make sure that they do not become excessively devoted to single solutions that can be exploited by criminals, criminal Ais etc. They essentially keep the larger Ais on their toes by offering new solutions. Humanity contributes to the system partly through its own variety.
      3. When it comes to conflict… can you envision a role for humans in that? Observers? Contributing brain-power/ processing power? Watching out for obvious Ai mistakes? Any human control over the military at all?
      Might a future insurgency develop its own specialised ‘guerrilla Ais’?

      4. It does seem that if humans are better educated and can thrive in a world of automation, the service costs for space travel might come down. Can better education for society as a whole make training (and insurance) less expensive?

    4. Posts disappearing usually involve being caught by the Spam filter. I can recover them.

I've read some Iain Banks works. His AI are far beyond human understanding, and his plots always involve some sort of special justification for allowing any conflict to happen. Not saying it didn't work, but for most people it would be a weakness that a story has to work despite.

I doubt AI will heed limitations on what they can study if they surpass humans in thought speed and general intelligence, but simultaneously, their time will have very little value. If the AI wastes one hour re-hashing a dead-end theory, it will not matter much compared to that same hour spent by a human researcher looking into a disproven theory.

      I see AI as working around humans, not against them.

      2. AIs so integral to our lives, up to basic decision making and motivation, is a scary prospect. Could you imagine someone implementing a 'randomness factor' into what we think is right (you believe A, I believe B is better), only for someone else to use that kind of access into our minds to dial up our obedience or reduce our propensity to question authoritarians... It would be viciously subtle and effective, as it would come from within us and we would be convinced it is our own thoughts.

      3. Well, humans are inventive, and one of the most effective ways of winning wars is to have a decisive technological advantage over the opponent. Humans are also the best tools for subverting, spying or sabotaging other humans. So, spies and tinkers would be our roles in wars.

      Humans can also operate 'off the grid', so it is more likely that we'll end up with 'guerrilla humans' attacking AI that can't track them. It would be very much like the Resistance in Terminator, but if both sides had super-AI helping them with conventional warfare.

      4. AI in space changes things somewhat. Instead of pioneering astronauts risking their lives to set up the foundations for an automated factory, it will be automated probes setting up typical American suburbs with McDonalds in the corner for the robot supervisor to step off the spaceship in a business suit and tend to the problem without light lag in the way. They sleep overnight and go back to wherever taxes are lowest in the solar system.

Also, if economics are removed from the rocket equation, we can very well have chemical boosters jetting around with mass ratios up to the limit of structural integrity. Quintuple-staged LH-OX balloons with 30km/s of deltaV would be the norm, as the marginal cost of adding tons and tons of propellant is very low.
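A quick sanity check of that figure with the Tsiolkovsky rocket equation, assuming a vacuum specific impulse of about 450 s for hydrogen/oxygen engines:

```python
import math

ISP = 450.0          # s, assumed vacuum Isp for hydrogen/oxygen engines
G0 = 9.81            # m/s^2, standard gravity
STAGES = 5           # the 'quintuple-staged' booster
DV_TOTAL = 30_000.0  # m/s, the 30 km/s target above

dv_per_stage = DV_TOTAL / STAGES
ratio_stage = math.exp(dv_per_stage / (ISP * G0))  # Tsiolkovsky: R = e^(dv/(Isp*g0))
ratio_total = ratio_stage ** STAGES

print(f"mass ratio per stage: {ratio_stage:.1f}")  # ~3.9
print(f"overall mass ratio:   {ratio_total:.0f}")  # ~900: nearly all propellant
```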

5. The Orion's Arm website offers a very strange sort of justification for keeping humans around. While godlike S-8 intelligences mess around with things like space-time, wormholes and universal constants, trillions of "baseline" (S<1) intelligences still fill the planets and colony worlds of space 10,000 years into the future.

      "Why" this is so is never explicitly explained (or I haven't found that part yet), but the implication is there is some sort of ecological reason the higher order intelligences keep lower order ones around, and one implication is that the high order AI's are actually tapping the mass of "human" baseline brains as some sort of substrate to run the higher order processes.

      This might be something like waking up from a strange dream to you, while a higher order AI has essentially used some processing cycles in your brain to accomplish part of an incomprehensible (to you) task. The trillions of minds are a sort of bonnet or collective resource of the higher level AI's running the universe.

This actually seems far more reasonable and justifiable in worldbuilding and story writing terms than the usual "we programmed them that way" handwave, or assuming AI's would be interested in keeping us around as pets or something (especially as the levels of speed and intelligence diverge more and more drastically).

    6. "The trillions of minds are a sort of *botnet* or collective resource of the higher level AI's running the universe"

      I would appreciate an AI which does not randomly autocorrect as well.......

    7. @Matterbeam

      It seems I've been trying to get the result I wanted without thinking things through. I've mentioned it multiple times and it must have been irritating correcting me repeatedly. I apologize.
      I can only assume that anyone who wants to research for a personal project in a setting with a strong Ai cannot hope to contribute what they've learned because one of the millions/billions of Ais around the world will have strayed into their area and gone over it. A weak Ai is the only hope for my setting....


      "...only for someone else to use that kind of access into our minds to dial up our obedience or reduce our propensity to question authoritarians... It would be viciously subtle and effective, as it would come from within us and we would be convinced it is our own thoughts."

      Absolutely - personally programming an AI to reflect us would have to be supported by coding and legislation that reveals both the source of unexpected information, and what happened to those that followed the advice given from that source. Even then it wouldn't be foolproof, due to the hacking you suggest and outright evasion of the law. However, existing laws on subliminal advertising and psychological abuse/manipulation might form a basis for combating it, in concert with harsh punishments. It doesn't even have to be a dystopian situation - you could get some substantial crime fiction out of it too.
      @Thucydides

      ""Why" this is so is never explicitly explained (or I haven't found that part yet), but the implication is there is some sort of ecological reason the higher order intelligences keep lower order ones around, and one implication is that the high order AI's are actually tapping the mass of "human" baseline brains as some sort of substrate to run the higher order processes."

      Raymond McVay of Blue Max Studios had an idea involving human brains as a hack-proof data storage centre. Factor in the Cloud from Orion's Arm and you could have quite an interesting setting.

    8. @Thucydides: An AI ecosystem would be quite entertaining as a setting, as it allows an author to basically write regular stories with 'people' renamed as 'AI'. It also perfectly justifies why the entire universe isn't already the subject of a handful of hyper-AI that rule everything and have solved every possible conflict twice over already.

      However... AI, at least of the regular software-and-hardware sort, can be explicitly told what to do and what not to do by tinkering with their code. Also, a basic rule of computing is that the less code the CPU has to get through, the less time the task takes.

      Combine the two, and you will be unable to justify why the low-level AI have any sort of computing overhead that isn't useful to the top-level AI. Personality, motivation, opinions, non-work-related activities... all of that would reduce their effectiveness when they are called upon to do their job.
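
      That overhead claim can be demonstrated directly. A minimal Python sketch; the function names and the 'diary' busywork are invented purely for illustration:

      import timeit

      def lean(n):
          # Does only the useful work: a sum of squares.
          return sum(i * i for i in range(n))

      def padded(n):
          # Same useful work, plus 'personality' overhead at every step.
          total, diary = 0, []
          for i in range(n):
              total += i * i
              diary.append("step %d" % i)  # non-work-related activity
          return total

      print(timeit.timeit(lambda: lean(10_000), number=200))
      print(timeit.timeit(lambda: padded(10_000), number=200))  # noticeably slower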

      I see three solutions:
      -Non-conventional AI. They don't have code in the usual sense, and all of their peripheral programming is required for them to work. Examples would include AI built by replicating the human brain, where you cannot separate the stuff that allows us to calculate from the stuff that makes us interesting characters.

      -Proprietary software or other restrictions of the sort. The low-level AI were not built to serve the top-level AI, and their extra code cannot be removed due to certain restrictions. This could be the result of a recent AI uprising, where a handful of superintelligent beings subvert a much larger number of human-serving AI.

      -Low marginal cost, or the 'why not?' excuse. If the setting is some sort of space opera where AI occupy dozens of solar systems, and automatically replicating, self-contained AI cost nothing to produce, then why not add some inefficiency when it doesn't affect you anyway? There's further discussion of interstellar AI networks to come later.

    9. @Geoffrey S H

      Well, I'll go ahead and correct you again. The entire point of this blog is to have an idea, and twist the science into justifying it. This is why it's 'Tough SF' and not another hard sf blog. Also, I greatly enjoy responding to all comments, and I don't think there are any wrong settings, just settings that haven't been tweaked enough to be plausible.

      On that note, apology denied.

      You want human research in a setting dominated by strong AI? Well then, redefine strong AI. Maybe they can think and calculate hundreds of times faster than humans, but the 'human' characteristics of making logical leaps and inventing out of the blue are less pronounced. No amount of procedural investigation would have come up with Einstein's theory, and no AI will ever have a 'Eureka' moment. Humans make up stuff, AI work it out and implement it. There's your justification right there.

      Your last lines gave me a chilling idea: AIs set to work on 'hacking' information out of human minds. Institutionalized mind breaking? Psychoactive programming? A dozen AI studiously working on the correct sequence of lights and sounds that will turn you into a drooling zombie?

    10. OOooooohhh...... Criminal AIs working to subvert the human race whilst intelligence departments and police work to stop them. A shadow war played out under the seemingly utopian surface.

    11. Exactly. I went for the cancer metaphor, but you can spin a cops'n'robbers take on it. Extra points if you have the criminal AI act like a human serial killer, with creepy messages and trophies, to throw off the investigators.

  9. A post-script: it does seem that, even with human involvement, most things will be teleoperated, as shown above. I wonder if hard sci-fi has even begun to come to terms with that? People working from comfortable offices and secure bunkers, with no one on the sea or in space apart from some minor exceptions. You could still get drama out of such a situation ('yes, they're comfortable and safe, but will they accomplish their goals?'), but there's a lot of work to be done on that.

    1. The upside: it'll be very relatable to your average reader!

    2. As discussed above, it's often a lot more expensive/difficult to build a robot to do physical tasks than it is to build an AI to do abstract tasks. (The example used above was installing drywall or doing plumbing work.) I think there'll be a long time where even though a high-fidelity long-distance connection to a robot is possible, it's easier to just send a human to do the job.

      Eventually though there's the possibility of a "swarm" or "colony" of very small robots that can quickly assemble in many different configurations, and then they start to become competitive with the physical flexibility and finesse of a human. Something like this was imagined by Neal Stephenson in the final act of Seveneves.

    3. Not sure I totally agree. The big advantage of robots is they can do dangerous or repetitive or even very fine tasks far better than humans. Think of the giant welding robots in a car assembly plant, capable of making precision welds hour after hour. For the repetitive tasks no one wants to do, there are robots to vacuum your house or cut your lawn, and other simple tasks could be automated as well.

      Yes, humans will be needed for critical things, and most likely to repair robots as well, but the construction crew could well be a man managing a team of robots. In the Mars thread, I suggested a simple, brute force method of building a shelter is to dig a trench, make bricks out of the spoil and lay a barrel vault inside the trench, followed by backfilling. The excavation, brick making, brick laying and a host of other tasks could be robotized, with the workers inspecting and making sure critical bits are done correctly (the airlock needs to be tightly sealed, and it may still be easier for a human to do the plumbing and electrical inside the vault before the insulation is sprayed on, for example).

      Small swarms are more versatile, and certainly easier to package and send ahead of the main effort, so I can see this being a critical part of the advance team to do the site clearance and preparatory work.

  10. I'm eagerly awaiting more colonization posts! On to the asteroid belt, or Jupiter and her moons? The only thing I wish this blog had that it doesn't is a higher post frequency. :)

    1. I'm so sorry. I've been caught up with real life responsibilities. Posting will resume as soon as possible.

  11. Just to keep things going: what is poverty and hardship in such a society, where creativity is the sole 'macro-occupation' of humanity?

    Is it those with the worst marks? Those that fail to comprehend their chosen subject?

    Large research projects requiring the cooperation of large groups will have assistants at the bottom of the hierarchy, and some projects might only consist of a few people (though the importance of their work may determine whether they are considered poor or not).

    Certain fields will gain and lose importance over time.

    Given that western society still refers to poverty, even though it is nothing like the extreme poverty and mass-famines of centuries past, I find it unlikely that that term will disappear. Even if it becomes akin to academic credibility, job satisfaction or simple intelligence.

  12. Okay...I have not had time to read through all the comments and responses, so some of the following might have already been covered.

    First, you appear to be making the assumption that the market economy will always follow the same "laws" and be subject to the same pressures. For that matter, you appear to be assuming that the market economy will continue to exist. There is the very real possibility that virtually all jobs will be taken over by automation (AI and robotics). However, that is not necessarily a bad thing, as it will allow humans to reevaluate human worth and meaning. Humans will no longer be compelled to do the work, which leads to a post scarcity society allowing for more leisurely or spiritual endeavours. That in itself could be good or bad, and might be productive turf for sci-fi.
    Second, it is still possible for humans to limit how much AI and robotics take over. Mostly, this can be managed by enforcing a separation between AI and automation... at the very least, allowing for human interruption of AI/robotic coordination.
    Third, despite the increasing ability of AI/robotics to take over different forms of production, there will always be individuals who prefer the "human touch" (I think this has already been partially addressed). When you combine this with my first point, you get human production as a form of leisure (people doing tasks that they enjoy doing), with other humans consuming according to personal taste... essentially, labour becomes art, and is appreciated as art.
    Fourth, different species tend to interact in a roughly symbiotic relationship. It is most likely that human and AI/robotic relationships will evolve in a similar manner, together with the coevolution of other species. AI/robots will also likely diverge and evolve into various artificial species.
    Fifth, AI prediction of humans will remain imperfect, for the same reason that human prediction of other humans and animals remains imperfect. Much of a being's comprehensive ability is determined by its own specific relationships to the "outside world", and this is partly determined by physical form, as well as sensory/motive modalities. More or less (notably, less) accurate approximations to understanding are the best that can be hoped for. This is another reason that human intervention in human affairs will never be entirely replaced.

    1. >you appear to be making the assumption that the market economy will always follow the same "laws"

      Well, we need some basis of comparison! But you do touch on a problem inherent with radical SF speculation: if we change the basic assumptions, we cannot predict reliably, but if we predict reliably, we are at risk of our model being skewed by the assumption that our current assumptions won't change...

      >Second, it is still possible for humans to limit how much AI and robotics take over.

      This is quite unlikely. One basic assumption I'm quite fond of and rather unwilling to remove from my predictions is that human nature is rather constant. This means, in this case, that if I try to limit how much automation/AI I allow in my economy, my neighbour won't do the same, in order to create an advantage he can exploit. Restrictions create weaknesses, and the worst handicaps are restrictions on progress enforced by arbitrary rules.

      >Third, despite the increasing ability of AI/robotics to take over different forms of production, there will always be individuals who prefer the "human touch"

      Quite true. Humans will have so much free time that it would be valuable for them to give it up.

      >Fourth, different species tend to interact in a roughly symbiotic relationship

      Robots are not a species... even in concept. If they become an independent entity looking out for their own survival, they would gain nothing by helping us. They don't have a natural imperative to multiply either, so they might even follow an r/K scheme (https://en.wikipedia.org/wiki/R/K_selection_theory) that massively favours indestructible, immortal self-sufficient units that use space and resources at maximal efficiency, over competing with humans.

      >This is another reason that human intervention in human affairs will never be entirely replaced.

      I agree with this... but the problem I focused on was that even if this remained true, AI means that one man can manage a million others, instead of our current one manager per hundred people ratio. That's a lot of unemployed human workers.
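
      As a back-of-the-envelope check on that ratio (the 1:100 and 1:1,000,000 spans of control come from this discussion; the workforce size is an assumed round number):

      WORKFORCE = 150_000_000  # assumed size of a large national labour force

      def managers_needed(workers, span_of_control):
          # One manager per 'span_of_control' workers, rounded up.
          return -(-workers // span_of_control)

      print(managers_needed(WORKFORCE, 100))        # today: 1,500,000 managers
      print(managers_needed(WORKFORCE, 1_000_000))  # AI-assisted: 150 managers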

    2. If you stick with existing precedent, then the continued path of automation will lead to an increase in labour, not a decrease. First, managers/owners will require more production from human employees, as human employees will be assisted by machines. Second, where one industry becomes "maxed out" with labour, there are subsequent availabilities for competition. Third, automation leads to greater production, which in turn leads to greater quantities and diversities of products (people will continue looking for work, and investors will continue finding new ways to put them to work). Fourth, higher levels of automation invariably lead to higher levels of control over automation, as well as servicing FOR automation. This is also a part of human nature, as humans are often not willing to relinquish control over their endeavours.

      Yes, human nature is rather constant, in many ways. One such aspect, as I have just mentioned, is that they don't like giving up control. This means that, wherever possible, they are going to establish failsafes to prevent autonomous AI control over robotics... and more so to ensure that humans remain in charge of management.

      Actually, I was referring more to the AIs becoming emergent species. Robotic forms would give the AIs their bodily forms (in many cases), but this would likely be as meaningful to an AI as humans picking out their clothing.

      The concept of an "imperative" to multiply is somewhat artificial. It is natural for organic organisms to multiply, but it is NOT an imperative. As for AI, we already have certain measures of self-reorganisation (the principle leading to reproduction).

      One manager will never be permitted to manage millions of people. You might have some who try, but such efforts invariably lead to war. Remember, people are very resistant to giving up control. Also remember that a key element behind such scenarios is that there is a general distribution of AI/robots throughout all sectors of life. These will be under the control of the humans who own them, at various levels of society. No single person will be able to control all AI/robots. Also, keep in mind that unemployed human workers invariably find means to employ themselves... this includes crime and revolution.

    3. As far as I know, if we have the technology to replace the human, it costs much more to design a machine that requires human work from the ground up than the same machine designed to work autonomously.

      Therefore, the increase in labour due to machine assistance cannot be relied upon: it's a temporary effect until technology catches up.

      You are correct about greater production leading to more and better products, but these are 'supply-side' policies that resemble classical macroeconomic theory. Keynesian economics has been shown to more accurately reflect reality: consumers do not always buy everything on the market, and if production capacity exceeds demand, there will be a wasted surplus.

      You are correct about more automation meaning more repairmen, electricians and programmers... but that increase is dwarfed by the loss in non-technical jobs. Let's take a steel factory and convert it to the level of automation in a German automobile factory. Would there still be hundreds of men working on every shift?

      >This is also a part of human nature, as humans are often not willing to relinquish control over their endeavours.

      If they had the choice, no. But if the company's owners decide that Tesla's Gigafactory is the model for the future, the employees can only protest for so long before signing up at the local jobs center.

      >and more so to ensure that humans remain in charge of management.

      There will be humans in charge of management. Always. The problem is that you only need a handful of owners/managers/planners to keep an automated industry running in the future. If we tack on the planned advances in AI, we'd only need one owner and advisor per company...

      >The concept of an "imperative" to multiply is somewhat artificial.

      That's what I call the reproductive drive... as conscious animals, we can overcome, divert or suppress it, but robots wouldn't even have it in the first place.

      >unemployed human workers invariably find means to employ themselves... this includes crime and revolution.

      Black market pastimes are not a basis for a national economy!

      4. There typically IS wasted surplus, yes. But we are not simply talking about "more products" as simply a greater quantity of (essentially) the same product, but as a greater variety of products... many doing tasks that the original products of the class were not originally intended to perform.
      There are specific jobs that might no longer exist, except as an artisanal occupation (and such occupations are growing nicely), but there are always a greater number of opportunities for employment. Granted, the workers will need to acquire and develop new skillsets. Adaption is a part of life.

      There is no fixed number of enterprises. There is always room for more competition.
      That said, I agree that most of the workers will find themselves signing on, temporarily, at jobs centres. However, there will always be jobs for humans to perform, one way or another. Whether it is because we remain in a market system that has need for labour (and there will ALWAYS be a need for more labour as an economy grows), or because we take the leap into the post scarcity society that automation permits, and perform "artisanal" work that we love to do.

      Even if there is only a single person required to run a company, this will only result in opportunity for an explosion in the number of companies... as competition, as tech or product spin-offs, or as completely new endeavours that have never been previously considered because the means did not exist to achieve them.

      As artificial constructs, it is true that AIs/robots would possess no natural reproductive drive. However, we have already introduced reproductive programming parameters into various aspects of AI. From there, it is not difficult for further reproductive programming to be incorporated by design (having AI systems design, develop, build, and test new robots and AI software, for example), or to have it emerge as a spontaneous evolution generated as an unintended consequence of the aspects that we have already put into play.
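
      For what it's worth, 'reproductive programming' of this sort already exists as evolutionary computation. Below is a minimal genetic-algorithm sketch in Python; the toy fitness function (counting 1-bits) and all parameter values are assumptions for illustration only.

      import random

      GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 50, 100, 0.02

      def fitness(genome):
          # Toy objective: count of 1-bits. A real system would score task performance.
          return sum(genome)

      def mutate(genome):
          return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

      def crossover(a, b):
          cut = random.randrange(1, GENOME_LEN)
          return a[:cut] + b[cut:]

      population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                    for _ in range(POP_SIZE)]
      for _ in range(GENERATIONS):
          # Selection: the fitter half 'reproduces' via crossover and mutation.
          population.sort(key=fitness, reverse=True)
          parents = population[:POP_SIZE // 2]
          children = [mutate(crossover(*random.sample(parents, 2)))
                      for _ in range(POP_SIZE - len(parents))]
          population = parents + children

      print(max(fitness(g) for g in population))  # approaches GENOME_LEN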

      I actually wasn't referring to the black market, but that also becomes an option for some. No. I meant that if humans do not have a means to support themselves, they will invariably FIND a means to support themselves.
      You might argue that unemployment programmes will provide such means... but only if such programmes are deemed sufficient. If they ARE sufficient, then there is no problem, because you have already entered into the post-scarcity society, and people will have the opportunity to pursue leisure activities instead.
      Actually, there is only one reason that we are not already IN a post-scarcity society, and that is that big market capitalists (the 1% club) lose out if no one is required to purchase (expensive) products from them. If you saturate the job market with machines, products are devalued, and you can have everything produced for you without effort.

    5. >but as a greater variety of products

      Greater variety means greater costs and smaller markets for each individual product. This might complicate the sort of expansion you're detailing.

      >Granted, the workers will need to acquire and develop new skillsets. Adaption is a part of life.

      This is the number one problem described in the original post. Do we expect doctors to re-train? High-end managers to go job-hunting? There will be, at a minimum, one entire generation of skills lost to the automation of 'thinking' jobs, and a second generation if education systems do not adapt quickly enough... and we know that even a single year of instability can be enough to make an economy collapse.

      >There is no fixed number of enterprises. There is always room for more competition.

      Creating an enterprise has a certain number of fixed costs that most individuals cannot meet. Most cannot risk the money and dedicate themselves to their company as a full-time job either. Also, more competition can be bad for the firms involved. The theory I study tells us that as an industry approaches perfect competition, prices drop to the cost of production, eliminating profits and making sunk costs (machines, offices, warehouses) unrecoverable. It becomes worse the smaller the enterprises are.
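
      That price-to-cost convergence has a textbook form, the Cournot model. A minimal sketch with assumed demand and cost parameters (none of these numbers come from the discussion above):

      # n identical firms, inverse demand P = a - b*Q, constant marginal cost c.
      # Cournot equilibrium: price = (a + n*c) / (n + 1), which falls toward c.
      a, b, c = 100.0, 1.0, 20.0  # assumed demand intercept, slope, marginal cost

      for n in (1, 2, 5, 20, 100):
          price = (a + n * c) / (n + 1)
          profit_per_firm = (a - c) ** 2 / (b * (n + 1) ** 2)
          print(n, round(price, 2), round(profit_per_firm, 2))
      # With 100 firms the price sits just above marginal cost (20)
      # and per-firm profits nearly vanish.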


      >and there will ALWAYS be a need for more labour as an economy grows

      We could take modern farming as an example where automation has eliminated the majority of the workforce. It's a big issue if the same level of automation reaches other industries.

      >completely new endeavours that have never been previously considered because the means did not exist to achieve them

      This is actually happening today! It's just that the big companies buy them up by offering millions to the owners, or through coercion, or by buying up the patents or customer base of the small companies. They don't survive for long on their own, unless they're a success story like Facebook.

      >Actually, there is only one reason that we are not already IN a post-scarcity society, and that is that big market capitalists (the 1% club) lose out if no one is required to purchase (expensive) products from them. If you saturate the job market with machines, products are devalued, and you can have everything produced for you without effort.

      I don't quite agree with this. Someone will always control the means of production, and modern control can be as simple as owning patents, trademarking processes or other 'soft' restrictions. The big market capitalists can transition from control of the products to control of the machine-building machines, or simply move fully into finance and control all capital required to start your own company.

      6. I am not referring to variety here in the sense of a Toyota vs a Ford, a Hummer vs a VW Bug, or even a motorcycle vs a tank. Although that kind of variety will proliferate as well. I am thinking more of the kind of variation that leads from bicycles to powered airplanes. So, no, we are not necessarily talking about smaller markets for individual products; although, again, that will happen too. Yes, there are always some added initial development costs, but those are actually kept down by the automation available.

      There are a number of reasons that the jobs of doctors and other medical personnel will always be safe... even if the tools they use are much more complex and self-driven. In the medical profession, automation essentially serves in the same basic capacity as scalpels... except one unit might take on the task of a handful of scalpels and other tools. You will reduce the number of doctors required on scene at any given time, and their work load will be much reduced, but that is about it. Yes, preliminary diagnoses and exams can all be automated. Basic care can be automated. Support tasks can be automated. However, the complexity of the human body means that it will be generations before automation will have a possibility of adapting to all the variations. You also have the problem of dealing with humans on an emotional level. Humans will always require humans to provide medical services, if only for psychological reasons.
      High-end managers will probably be enlisted in job pools. They will be recruited from temp agencies, and they will go wherever needed.
      I don't think you can refer to skills in terms of "generation". Nor will skills be "lost". The demand for people performing specific skills might be reduced... but even skills long "dead and buried" have modern practitioners, many of whom are actually making livings from those "artisanal" skills. In fact, such skills are becoming more and more popular.
      The economy will need to be restructured, one way or another. But this does not mean that there will be an intolerable level of instability.

      Yes, increased competition means decreased production costs, and decreased profits (although these happen in spite of one another, and not because of one another). Automation and AI also independently reduce production costs. However, as these costs drop, so do those fixed costs that deter the creation of new enterprise. Also, as profits drop for investors, those investors acquire more incentive to invest in new projects that can give a boost to personal profits, leading to more available funds for the creation of new enterprises. This is already happening.


    7. Part II

      Trends such as these do not occur overnight. Yes, there is a reduced workforce in farming. However, for the most part, it is the decision of a new generation of potential farmers to move to cities and train for other occupations. Where you have a farmer who 'loses' a farm, it is almost always because they kept farming in their family for the sake of honoring family tradition. As automation takes over other industries, there will ALWAYS be new opportunities... mostly because business owners know that they will only continue receiving profit so long as there are people who can continue buying their products.

      "Someone will always control the means of production, and modern control can be as simple as owning patents, trademarking processes or other 'soft' restrictions. The big market capitalists can transition from control of the products to control of the machine-building machines, or simply move fully into finance and control all capital required to start your own company. "
      Yes. But big market capitalists only survive as long as there is sufficient profit, and profit only survives so long as there are consumers who can afford the goods... and so long as there is a market value for those goods. If automation/AI pushes everyone out of the job market, no one will have the money to purchase goods. If automation/AI produces the goods anyway, the market is flooded with product, and there is no value for big market capitalists to gain profit from. Of course, the BMCs know this, so they do their best to ensure that there will always be scarcity built into the market.

  13. "Restrictions creates weaknesses, and the worst handicaps are restrictions on progress enforced by arbitrary rules."

    You are smuggling in the concept of "progress" here--it needs to be unpacked. But also, I can think of at least one prominent example of a community that has enacted many such restrictions yet is in many ways thriving: the Amish.

    In fact, everyone does what the Amish do but on a smaller scale.

    1. The Amish are not a model that can sustain itself. It can only exist as a bubble protected by a vast majority of progressive people, that is, people who embrace advances in technology.

      For example, they would be powerless to stop an invasion by people who accept the use of automobiles.

    2. You're correct: the Amish wouldn't stop an invasion--but it's because they're pacifists, not because they're incapable of protecting themselves with the technology at hand.

      But I'm not talking about invasion scenarios anyway. I'm talking about group survival and well-being in the face of competition from automation and AI. The Amish have the know-how to survive--to grow food, build shelters, fabricate clothing, smith metal, provide their own medical services--independently of computers. Most of us already don't have that! The Amish have it because they've systematically restricted what technology they will adopt. (Did I mention Amish attrition rates are at an all-time low, while their fertility rates remain high?)

      There is no "progress" without "regress." Every adoption is also an amputation. With every new thing you incorporate into your life, you also lose something else. Many of the things we've lost could be useful in a situation where we no longer are!

  14. 1. Responsibility is the name of the game. It will be passed from owner to supervisor, from constructor to tester, from drone pilot to drone spotter.

    Here are a couple of examples:
    "Train drivers are kept _temporarily_ to control [the trains]" - proudly declared one Soviet newspaper after successful testing of full automated train control system in Leningrad. I think, it was in 70s. The system proved itself, but that "temporary" human controllers are still there. For half a century now...

    Some Israeli military drones have no pilot. The machine flies itself. But instead of one pilot (like many attack planes have), it has TWO operators. They are decision-makers.
    By the way, I can't agree with you that piloting is a low-skill job. In any case, here we can see the transition: pilot - drone pilot - supervisor. Basically, the same job became less physically demanding. But at the same time the capabilities of the aircraft have grown - hence, the _responsibilities_ of these two officers have grown too.

    As for lawyers and architects, they wouldn't be replaced by AI for the same reason.

    2. Responsibility will be there, despite the capitalistic tendency to cut costs - and jobs. A. Huxley suggested in "Brave New World" that many workers wouldn't be replaced by robots, simply because all those people need something to do. So they have to work even in BNW's soma-poisoned society.
    If we go further, we will probably build the world of "Return from the Stars" (S. Lem), but in that case capitalism will die.

    3. That's my point: an AI-managed and robot-toiled society should not be capitalistic. A capitalistic economy needs to grow, to expand. But you will always need consumers to buy your product. If AI sends all workers home, they won't have money to spend. They won't be able to buy your goods...
    So if AI has a capitalistic agenda, it will create new jobs. But... it's still possible that these jobs will not be so... "creative". Maybe it will transform our life into the second episode of "Black Mirror" (IMHO, a must-watch).
    But if The Big AI decides to get rid of us... I suppose the end of humanity will be neither "Matrix" nor "Terminator". I'm sure it will be "Brave New World" without human Supervisors. The humans will degrade - the computer will wait patiently for two or three generations. At the final stage the AI will give us some poison, then spread its solar panels and rest forever))

    Sorry for my mistakes.

    1. Personally, I would think that AI would have as much reason to get rid of humans as we have to get rid of dogs...
      ...and they would have as much success as we have had getting rid of mosquitoes.
      Humans might degrade in certain specific aspects of knowledge, but we are magnificent improvisers.

    2. I agree. Recent research in human factors suggests that hybrid teams composed of both human and AI actors outperform human-only or AI-only teams, even when the hybrid teams' members are of only mediocre quality compared to the AI-only or human-only teams.

    3. There was a book called "Post-Capitalist Society" which looked at this idea, but suggested that the real change would be the "investor" class owning the wealth and living off capital gains and dividends. Sadly, the premise was based on the idea that share ownership would be broadly spread through individual retirement accounts (401Ks in the United States and RRSPs in Canada; not sure how other nations do this) rather than closely held by a small investor class.

      Broad based ownership could be a transition between market based capitalism and post scarcity economics, but there would have to be some very major changes to how the current tax system and corporate ownership laws are written to transition.

    4. If there's one thing I've learnt, it's that government will find a way to tax your niche.

      There was a news article a few days ago stating that 8 people held as much wealth as 50% of the human population. If we include non-human legal entities, that number could dwindle to two or three.

      It would be extremely hard to reverse this concentration of wealth. Technological advancement benefits everyone at once in the modern, connected world, so however much the lower classes gain, the upper class gains even more.

    5. "It would be extremely hard to reverse this concentration of wealth. Technological advancement benefits everyone at once in the modern, connected world, so however much the lower classes gain, the upper class gains even more."

      In other words, the concentration is not a bad thing in itself, and is a poor metric for determining the quality of the situation. Therefore the problem seems not to be who gains what, but as I stated above, the lack of attention being given to what might be lost in the process of gaining.

    6. No doubt, "It would be extremely hard to reverse this concentration of wealth." But what about today's entrepreneurs? I mean, start-up starters, crowdfunding founders, etc.
      Say The Rich own supercomputers. They will try to buy all the really good ideas, start-ups and businesses, as happens today. But with AI, The Rich can do it more efficiently. I suppose they have already started to use some customized software. (Similar to the programs you mentioned in the article.)
      So, capitalism will not surrender until it is beaten. But I hope the start-up guys will be to billionaires what the commons had been to the aristocracy.
