You are currently browsing the category archive for the ‘Meta things’ category.

I guess everyone has heard of the following philosophical dilemma (attributed to Zhuangzi): Suppose you dream that you’re a butterfly. Now you wake up and you’re a human. But how can you be sure that you’re not just a butterfly dreaming that it is a human? (And would it make any practical difference?)

I’ve often had lucid dreams before, but I don’t remember anything like the dream I had last night. It started with me dreaming that I was dreaming that I was dreaming that I was dreaming. Though at the beginning I thought it was reality. Then, as various strange things kept happening, I suddenly woke up, or rather I thought I had woken up (though in fact I was still dreaming that I was dreaming that I was dreaming). I was glad that I had found a rational explanation for all those strange things, but then other strange things kept happening and I kept asking myself: “How can this be, now that you’re awake?” And so eventually I “woke up” again, and in my dream I remembered that I had “woken up” before. Now repeat the whole thing again. Eventually, rather than “waking up” I just became lucid, so I knew that I was actually dreaming and the inexplicable things didn’t bother me anymore.

So now I have the following dilemma:

How can I know whether I’m not just a human, dreaming that he’s a butterfly, who’s dreaming that it’s a human? 😉


It’s really amazing how many simple but fascinating thoughts have never crossed my mind so far. Take, e.g., the issue of free will. As with pretty much any philosophical issue, it’s first of all a question of definition. Probably something along the lines of “being able to make choices” comes to mind.

But which of the following systems/things (if any) can actually make choices:

A ball rolling down a bumpy road, a pocket calculator computing the digits of the square root of 2 one after another, a robot in a T-maze (just a single intersection where the robot can turn left or right), a chess program.

It’s actually not as trivial to tell as one might think. At least not if one honestly thinks about it for more than just a split second. “Repeatability” partly comes into it (so that you can verify whether the same system would have behaved the same/differently in the same setup). The difference between randomness and choice is also not completely trivial. Somehow, there needs to be a “conscious” decision. But how can you detect consciousness from the outside without introducing a cultural bias (… it has to “think” just like you …)? It also depends on details such as whether the chess program “learns” from its past mistakes. The funny thing with many of these phenomena is that when you become very concrete/specific and give very strict conditions on what it means to have a “free will”, then suddenly even certain computer programs (maybe using some random sources) have free will, but then you realize this is actually not what you meant by “free will”.
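The “repeatability” test can be sketched as a toy program (purely illustrative; the agent names and the seeding setup are my own assumptions, not anything from a real experiment):

```python
import random

# Two toy agents at the T-maze intersection (illustrative names).
def deterministic_agent(maze_state):
    # Always turns left, regardless of the setup.
    return "left"

def random_agent(rng):
    # "Chooses" a direction using a random source.
    return rng.choice(["left", "right"])

# The repeatability test: replaying the *same* setup (here, the same
# random seed) makes the random agent behave identically every time,
# so its apparent "choice" dissolves on closer inspection.
replays = [random_agent(random.Random(42)) for _ in range(3)]
print(replays)  # three identical turns
```

Neither agent looks like it is “choosing” once you can rewind the setup exactly, which is precisely the point the paragraph above is circling around.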

Every day I wonder more and more where this whole “self” which believes to have a “free will” comes from.

Intuitively, I’ve always been convinced that certain truths cannot be understood rationally but must be experienced through non-standard ways of thinking (or rather non-thinking). [And similarly that such truths cannot be taught or explained in the usual sense.] This kind of belief is also at the heart of the Koans (short “meaningless” dialogs or riddles) in Zen Buddhism. But the connection to Gödel’s Incompleteness Theorem (or rather Gödel’s first such theorem) had always escaped my attention.

The Theorem says something like this:
If your system of reasoning is (a) sufficiently powerful (and, e.g., does not consist only of the single statement “a=a”) and (b) consistent (that is, you can’t prove both “a” and “not a”), then there always exist truths which cannot be proven/derived/verified within the system. [If your system is not sufficiently powerful, it’s boring/meaningless in the first place. If it is inconsistent, well, it’s also pretty meaningless.]

The proof of this theorem essentially uses the self-referential sentence: “This sentence cannot be proven.” If you manage to prove it, you’ve proven a false statement, so your system is inconsistent. If you don’t manage to prove it, well, then your system is incomplete, as the sentence is true but you cannot prove it. The interesting thing is that such a self-referential sentence can actually be constructed within a formal logical system (using Gödel numbers).
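The self-referential trick can even be played with ordinary strings. The sketch below is a toy version of Quine’s construction (the function name and phrasing are my own, and this is only an analogy: the real proof encodes the sentence arithmetically via Gödel numbers rather than via string quotation):

```python
def quine_sentence(predicate):
    # Precede a phrase by its own quotation, yielding a sentence
    # that ends up talking about itself (Quine's construction).
    return repr(predicate) + " " + predicate

p = "yields a sentence that cannot be proven, when preceded by its own quotation"
s = quine_sentence(p)
print(s)
# The sentence s describes exactly the string you get by preceding p
# with its own quotation -- which is s itself. So s effectively
# asserts: "this very sentence cannot be proven".
```

The point of the toy is only that self-reference needs no magic: quotation (or Gödel numbering) is enough to make a sentence talk about itself.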

What does that have to do with Koans and Zen Buddhism?

Well, it gives a hint, or rather just an analogue, for why we cannot expect to derive all truths by logic alone. The Koans seem to let go of the consistency requirement and, by allowing paradoxes, still seem to help to acquire “truth”. At least if one manages to tune one’s brain into the right frame of mind.

Gödel, Escher, Bach. A great book. I very highly recommend it.

The concept of a meta dance was new to me.

Sometimes, dancers (in dance shows/performances) try to depict something or someone. They try to tell a story. They might transform into animals or even machines. This is all “standard”.

But I’d never seen a dancer trying to create the impression of being … a dancer.

Last Saturday there was a dance show with the title “A quoi tu penses ?” (“What are you thinking about?”). It consisted of a sequence of short modern dances (about 10 minutes each) which were telling the story/thoughts of dancers at various stages throughout their career. Really ingenious idea!

Unfortunately, my French is not good enough yet to fully appreciate all their verbally expressed thoughts, but the non-verbal part was also entertaining on its own.

Can a computer recognize beauty? That’s one of the questions about consciousness discussed in Gödel, Escher, Bach. I agree with the author that the answer is “yes”, but I think this is kind of the wrong question to ask and not a good test for consciousness.

I personally believe that the ability to recognize beauty is nothing INNATE (???!), but something we learn, something we are taught. If you grew up believing that when the sunset crimsons the horizon, it means that the source of all life demands the sacrifice of human blood, you would probably no longer view the sunset as beautiful. I also believe that not only the recognition of beauty but also the appreciation of beauty is something we are taught. Of course, here it is more difficult to define what it would mean for a computer to “appreciate” a nice .jpg file, as this would essentially involve a definition of a “feeling”. But maybe a “feeling” can somehow be defined as a different operational mode, i.e., a feeling defines certain rational/computational paradigms according to which our brain operates.
A somewhat related thought: One of the best tests for true artificial intelligence that I’ve heard of is the following:
Ask a computer to explain a joke to you.
But even this might not be the ultimate test, as (i) for some jokes even humans have problems, and (ii) with some simple rules/heuristics you can probably “teach” a computer (at least conceptually) to explain simple jokes (involving blondes etc.). Recognition of irony might be more difficult. But, again, here also humans fail regularly. The good old Turing Test would probably be passed by some simple programs (along the lines of Eliza) if the other person involved in the conversation is not used to dealing with computers and/or having non-standard conversations. [E.g., the program could use simple “escape phrases” when it is not sure what to answer, such as “I’m really not in the mood to discuss this kind of topic.” or the more Eliza-like “Why do you ask this?”]
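A minimal Eliza-style responder with such escape phrases might look like this (a sketch under my own assumptions: a few hand-picked keyword rules, not Weizenbaum’s original program):

```python
import random

# Keyword rules checked in order; if none matches, fall back to an
# evasive "escape phrase" (illustrative rules of my own invention).
RULES = [
    ("dream", "What do you think that dream means?"),
    ("mother", "Tell me more about your family."),
    ("?", "Why do you ask this?"),
]
ESCAPES = [
    "I'm really not in the mood to discuss this kind of topic.",
    "Interesting. Please go on.",
]

def respond(utterance, rng=random):
    low = utterance.lower()
    for keyword, reply in RULES:
        if keyword in low:
            return reply
    return rng.choice(ESCAPES)

print(respond("Last night I had a strange dream."))
print(respond("Blah blah blah.", random.Random(0)))
```

Against a conversation partner who never probes the same topic twice, a handful of rules like these can keep up appearances for a surprisingly long time, which is exactly why the classic Turing Test is weaker than it sounds.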

Anyways, endless topic. Still always interesting to think about what actually defines a conscious mind, what is needed for self-awareness, and whether our kind of intelligence is really qualitatively different from that of, say, apes. (Of course, we’re smarter, but are we more than just “clever apes”? Isn’t the fact that they can recognize their mirror image sufficient to prove that they have a [low] level of self-awareness?)

Enough random nonsense.

But then, that’s actually part of the point.

Koans are riddles, questions or short fictitious dialogs used in Zen Buddhism. According to my (limited) understanding of Zen Buddhism, one of their defining characteristics is that they usually evade rational thinking. They are meant to be confusing and startling. If they make sense to you, you have probably misunderstood them. Funny beasts they are.

One of the reasons (according to my understanding) for such silly things is that some people believe that enlightenment cannot be “taught”, that you cannot rationally convince someone of the path to enlightenment. Certain things need to be experienced. Another reason is that when you reason about something you’re still separated from the item of contemplation. But enlightenment (or certain interpretations of the term) entails being one with everything.

Ok. Enough metaphysical babble. Here’s some practical advice (in the form of a Koan).

A monk asked Zhaozhou to teach him.
Zhaozhou asked, “Have you eaten your meal?”
The monk replied, “Yes, I have.”
“Then go wash your bowl”, said Zhaozhou.
At that moment, the monk was enlightened.

There are cold flames, flightless birds and also at least one (almost) wordless writer.

But, fortunately for me, there’s always the escape route to the meta level: If you don’t know what to write you can always babble about having nothing to write about.

For the reader it seems to be more difficult to pull off the same trick. He can’t escape as easily and just start “reading about reading”, because then someone would have had to write about reading — and so he’s just reading on the ground level again.

But he can still try not to play according to the author’s rules by “reading between the lines” (although somehow nobody ever seems to write between the lines … at least my blogging software does not seem to allow it). If that still doesn’t satisfy his thirst for literature, he can always go and read his own mind, which might still be more exciting than reading this post 😉


Oxymorons for the masses:

