Know It All, Or Just Part?

Would it be better to have a supercomputer loaded with all the world’s knowledge, or just the “good” knowledge?

An immensely powerful AI engine could be loaded with everything we know, good or bad. It could know about love, puppies, and flowers. It could be told about torture and abuse. Those programming it can set a direction.

Would it be better to go forth relying on something that knows evil, or should evil be programmed out of the AI system?

What would happen if AI only “knew” the best of art, culture, literature, and design?

What if AI only “knew” proven scientific facts and nothing else?

What if AI only “knew” sales and marketing?

Who decides the above?

Comments | 6

  • Deeper Depths

    Good and bad are inadequate terms for the ultimate scope of your questions. Beyond issues of moral relativity, there is the fact that much of what is ‘great’ in Art and Literature is transformative. Think of it as ‘Natural Intelligence’ in distinction to AI — e.g., Guernica, or Moby Dick, or The Scarlet Letter, or Othello, even Banksy. They all make light from darkness.

    Computational intelligence such as Wolfram Alpha, or wherever Siri and Watson are headed, will evolve. I’m not sure it will ever have the capacity to synthesize the gestalt of facts into a beautiful nebula of transcendent universal creativity.

    • NSA Spy Company AT&T Is Our Parents

      (Good and bad are placeholders for the general concept, as they are commonly used. Yes, they are inadequate, but they get the basic idea across. Perhaps in a bad way that no one should ever know about.)

      Is it ever good for us to not know something? Would we all be better off not knowing something?

      Is it always the case that knowing as much as possible is the best way forward?

      Is there some information that is better than other information? (We prefer people to read classics over trashy novels, and would like people to appreciate symphonies rather than manufactured pop songs, right?)

      Sometimes it is polite to withhold some key bit of information. Letting someone know something might “hurt” them.

      The synthesis that drives our creativity often comes from making mistakes, or breaking rules. If those are programmed out, the creativity might suffer. Where will future advancements come from if mistakes or intentional disruptions are eliminated?

      That leads to an interesting situation – knowing “everything” also implies knowing “wrong” as much as what is “right.” And when to deploy them.

      I suppose we’ll know when an AI bot has intentionally withheld information.

    • "synthesize the gestalt of facts into a beautiful nebula"


      Lots of questions here with varying degrees of significance.

      Can artificial intelligence implanted into the brain control the mind, which is made up of thought and instinct processes?

      We’re not too far away from having 8-9 billion people running amuck on this planet. The phrase knowing “everything” will never apply to all of them.

      AI in or on the body and AI outside the body…what a riot of questions. Can AI save you? Me?

      Speaking of creativity and imagination…what will happen to those believers…will AI relegate them to the imaginary realm of mythology?

      • the deep

        It is kind of amazing how much knowledge is available yet not taken advantage of. We humans can’t process a copy of everything immediately, like this super machine I’ve concocted, but some of us don’t even bother looking in books or asking someone a question. Should the machine know laziness? Should we teach one to not be curious at all? : )

        Should the super knowledge machine know mythology? I would say yes, mythology knowledge could produce “good” so it goes in the machine.

        Should it know about religion? What should it know? Who should tell it? The Pope or Stephen Hawking?

        This all comes down to a basic question: is it dangerous to know something? Is there something we would be better off not knowing? If so, is that the same as censorship and book burning?

        Should we not know how to make guns, or is it better to know how just in case?

        • Let the games begin

          As Vidda points out…there’s just too many people…to know who can or should know…or what is worth knowing…or who should determine who could know…

          I’d say that presently the common denominator in dispensing goods (or bads) is money. How capital will drive who gets to determine who qualifies for ‘an upgrade’, if such a thing as ‘installed AI’ is indeed an improvement…is also now out of reach.

          Easier for my mind to frame this in mythological terms…Money = Zeus, and Zeus presides over Celebrity Mythology Death Match to see who comes out on top. Round one:

          In ring #1 – Prometheus(power to the people) vs. Pandora (keep the bad locked away)
          Ring #2 – Hydra (every blight a separate head) vs. Cassandra (she knows all that will happen, but nobody believes her)

          • boardroom bias

            Only the wealthiest will own this tool. Good point. Leaders will use it to shape us.

            Of course, that limits the numbers to a board room or two. Maybe the question should be which corporate board room should decide what gets put into Big Blue’s successors? IBM has started down the path. A corporate origin could lead to some interesting cognitive bias, eh?

            Zeus seemed to care a bit for the concerns of other gods and mortals. It wasn’t just about profit. He wasn’t simply charging admission or fees: “I will smite thee… and here is a bill for my smiting services.”
