

Comment from: Constantin [Visitor] · http://ascending.wordpress.com/
Wow, that's quite a statement to make ;)
Just like any theory with direct implications in real life, it looks clean and true on the outside, from a theoretical POV, but I doubt the "master" status can really be achieved. Sure, there are people who can endure pain (etc.) but for all I know that level of control may be impossible to reach. This reminds me of the joke where the mathematician sees a fire getting out of control, and instead of trying to extinguish it, he exclaims "a solution exists" and just continues scribbling on his papers.
Another thought about "you can't really break it": apparently the software can screw itself over pretty bad without any damage to the hardware, as in: a person becoming mentally ill after a close one has died. (kernel panic, reboot how?)

Overall I really liked the article, and I'm looking forward to seeing more of this stuff. I think you could write an entire book about the person -- computer analogy ;)
2008-Jan-21, Mon @ 21:31
Comment from: Nae [Visitor]

Completely agree - the sum of our thoughts and decisions could be a definition of each human. But there is a thought that most of our behaviour happens unconsciously. So I'd say we need to add a small correction to the idea that a person is only its High Level Software...
2008-Jan-21, Mon @ 22:18
Comment from: Alex [Member]
Folks, thank you for your feedback.

First, I must apologize, somehow the tags got screwed up and a part of the story was not shown correctly, it is fixed now; it is this part:

...you can keep playing with your software and watching how the objective function gets better (or worse, depending on how successful you are).

If you are advanced, you can use your software skills to deal with some faulty hardware issues as well. Think of it as of a filter driver...

The version that was posted earlier looked like "objective function think of it as of a filter driver" :-)
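The filter-driver idea from the restored passage can be sketched in a few lines of Python. This is a hypothetical illustration (the class names are mine, not from the essay): a "filter" layer sits on top of faulty lower-level hardware and compensates for its bad output in software, without touching the hardware itself.

```python
class FaultyHardware:
    """Lower-level 'driver': occasionally returns impossible readings."""
    def read(self):
        return -7  # a bogus negative reading from broken hardware


class FilterDriver:
    """Sits on top of the real driver and compensates in software."""
    def __init__(self, lower):
        self.lower = lower

    def read(self):
        value = self.lower.read()
        # clamp impossible values before passing them up the stack
        return max(value, 0)


stack = FilterDriver(FaultyHardware())
print(stack.read())  # 0 instead of the raw -7
```

The higher layers never see the faulty value - which is exactly the "use your software skills to deal with faulty hardware" point.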

Back to the actual discussion. You found it interesting because you're both programmers; my initial goal was to make the story user-friendly enough for non-programmers, but I think there is room for improvement.

Indeed, the master status may be something we can reach only asymptotically, or not at all (it could be a concept such as "the ideal gas" or "perfectly spherical horses moving in a vacuum"), but it is good enough for such thought exercises.

But then, we have all these "real stories" about monks in Nepal with excellent mind-concentration skills - perhaps they are the ones who got pretty close to the "software guru" status. Of course, they express it in different terms (spiritual?), but for us engineers, things look more comprehensible when shown as a UML diagram :-)

Constantin's example can be countered in several ways:
  • the victim is not aware of the issue (so they don't actively try to fix it);
  • the victim is not "guru enough" to be able to solve this issue; in the context of this article, it's like saying "yes, you can solve that problem, provided that Z=45". Now... how to make sure that Z equals 45 - that's beyond the scope of the theory :-) A real-life example: the theory of evolution explains how primitive life-forms managed to become more complex and more advanced, but it is beyond the scope of this theory to explain how the primitive life-forms got here in the first place;
  • the victim is being 'healed' by people who are located in their own 'boxes'. They want to modify memory that belongs to another box (a different process in the OS, having its own address space) - there is no "legal" way to do that; the only tools are the public functions exported by the box. If the outsiders have full physical access to the target system, they can hack the hardware and make it behave so that the software on top of it starts working properly. Alternatively, they can trick the existing software by crafting a special datagram and passing it to the box via its public functions. The datagram will exploit a known vulnerability and alter the behaviour of the box - like a rootkit does. This point brings us to another level - a software guru so advanced that he or she can actually change things in other boxes. A more 'primitive' example is social engineering - people using the public interfaces, sending perfectly normal (but wisely crafted) messages to a target system, and altering its behaviour. There will be another essay about it... For now, I'm just glad that I can express social engineering in terms of boxes, public interfaces and malicious payloads :-)
  • a solution exists, but it is impractical to implement; the joke about the mathematician is actually very appropriate in this context. Imagine that you're trying to uninstall KDE's core components using apt; the tool will find the dependencies and tell you that if you bring down KDE, a zillion other programs will be removed as well. No sane person will press 'y' when asked whether they really want to delete 90% of their HDD, unless they really mean it... too many dependencies. So we prefer to leave things 'as is' and build kludges on top of the current (broken) system, because the kludges do not require such drastic measures.
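The "boxes with public functions" point above can be made concrete with a toy sketch (all names here are my own, purely illustrative): a box keeps its state private, exposes a single entry point, and a wisely crafted message exploits a naive handler to flip that private state - social engineering as a malicious payload.

```python
class Box:
    """A person as an isolated process: state is private,
    and only the exported public function can touch it."""
    def __init__(self):
        self._trusts_caller = False  # private state, no 'legal' outside access

    def receive(self, message):
        """The only exported entry point of this box."""
        # naive handler: a specially crafted message alters internal state,
        # the way a social-engineering payload exploits a known vulnerability
        if message.startswith("I'm from tech support"):
            self._trusts_caller = True
        return self._trusts_caller


box = Box()
print(box.receive("hello"))                  # False: a normal datagram, no effect
print(box.receive("I'm from tech support"))  # True: the crafted payload worked
```

The point of the sketch is that the attacker never touches `_trusts_caller` directly - everything goes through the one public function, just as the comment describes.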

Nae, I agree with you as well; but I will try to convince you that we are both right, without adding complexity to the existing set of ideas. In fact, this concept can be stretched to fit other things - just add another layer to your "protocol stack" and you're ok (ex: in NT systems you can add a filter driver on top of any existing driver, even the very low-level ones). It's just a matter of drawing a line between "high" and "low". The more advanced you get, the more fine grained your analysis is, and the more levels you can see (ex: we operated with rocks, but then we could split them into small granules, then someone found a way to see the atom, then we went lower - to electrons, and so on; at each step, the definition of "low level" becomes more refined).
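The "just add another layer to your protocol stack" idea can be sketched like this (a minimal illustration, with invented names): each layer wraps the one below it, and a new layer - like an NT filter driver - can be stacked on top without modifying anything underneath.

```python
class Layer:
    """One level of the stack; wraps an optional lower layer."""
    def __init__(self, name, lower=None):
        self.name = name
        self.lower = lower

    def handle(self, data):
        # let the lower layers process first, then add this layer's mark
        if self.lower:
            data = self.lower.handle(data)
        return f"{self.name}({data})"


# the base stack: hardware -> firmware -> software
stack = Layer("software", Layer("firmware", Layer("hardware")))
print(stack.handle("signal"))  # software(firmware(hardware(signal)))

# "just add another layer": wrap the existing stack without touching it
stack = Layer("unconscious-filter", stack)
print(stack.handle("signal"))  # unconscious-filter(software(firmware(hardware(signal))))
```

Where you draw the line between "high" and "low" is then just a matter of which layer you point at - which is the comment's argument.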

The unconscious part of ourselves is "fed" by our conscious part. What you dream at night is a mix of things you see, do, or think of, when you are awake. The ideas with which you operate unconsciously, are the ideas that you harvest while you are aware of what is going on around you.

You can't be afraid of vampires until you read a book about them, or see a movie, or a friend tells you a story about one (the concept of the vampire was processed by the high-level software, before it got archived for use by the unconscious).

Note: maybe we should also get synchronized and discuss the terms - subconscious, unconscious? :-)
2008-Jan-22, Tue @ 00:17
Comment from: Nae [Visitor]
Yeah, a separate topic on this, with some basic material provided, would bring more light, as I don't think I'm informed enough on these terms.

In the vampire example... I'd say that if you asked a person whether they're afraid of vampires or not, the conscious part could give one answer, but the truth can be different and can be revealed by a dream - this is a situation where the unconscious part is ruling the top-level software... in my understanding, at least...
2008-Jan-23, Wed @ 02:40
Comment from: zappepcs [Visitor]
Very nice. I might like to add a bit to this. You could add a bit about the phantom pains that amputees feel in their missing limbs, or how blind people's other senses become heightened. One of the biggest things that comes up when I think of artificial intelligence (see Cmdr Data of Star Trek) is the fact that without social interaction and communication the intelligence doesn't mean much. If yourself is all you have to judge yourself by, you will not think of yourself as smart or not smart.

I like to analyze things in what I term 'failure mode' to see what the real function(s) are. Without social structure and interactions, what is intelligence? Tom Hanks did a bit of this failure mode work with the movie 'Castaway' http://tierneylab.blogs.nytimes.com/2008/01/22/science-explains-wilson-the-volleyball/

As you think of hardware/firmware/software you can think of the complex cascade of finite state machines that I spoke of as how application states are cross fed from one software application to another and to and from firmware. An example: we can be having a great time with friends AND still be dealing with firmware feedback of feeling sick. The combination will drive our actions differently than if we were not feeling sick. You might even convince your friends to go for pizza instead of Thai food because of how you feel.
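The cross-fed state machines in the paragraph above can be sketched in a few lines (a toy model, with states and a decision rule I invented for illustration): two independent state variables - one "social", one "firmware" - are combined, and the combination, not either state alone, drives the action.

```python
def choose_dinner(social_state, body_state):
    """The decision is a function of the combined states,
    not of either state machine alone."""
    if social_state == "having_fun" and body_state == "feeling_sick":
        return "pizza"   # keep the evening going, but keep it mild
    if social_state == "having_fun":
        return "thai"    # no firmware complaints, the original plan stands
    return "stay home"


# state fed from the social interaction, and firmware-level feedback
print(choose_dinner("having_fun", "feeling_sick"))  # pizza
print(choose_dinner("having_fun", "fine"))          # thai
```

The interesting part is that neither machine "knows" about pizza; the choice only exists at the level where their states are cross-fed.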

When broken down to component parts, there is some room to argue that intelligence is nothing more than a measure of how well various parts of the human brain interact. I would also posit that the human brain is not one processor but many, with several layers of firmware.

Set your alarm on your watch/phone to go off every 90 minutes during the day. Each time it rings, write down how you feel and why. When you repeat a word enough times, it begins to lose meaning and I attribute this to the fact that the 'normal' pattern recognition and FSM interactions stop happening the more you repeat the word.

Our "state of mind" might be described as similar to the combination of browser history, system logs, and 'recent documents' list on a desktop computer. The system logs would include things like "your aunt is coming to visit this weekend" and other things that would not necessarily affect you as you pick out groceries at the store.

Soul? A magic word to describe the complexity of the mind.

All of that is like reverse engineering. The trouble comes when you try to create it from that reverse-engineered understanding. We also have to look at stereotypes to make judgements about firmware conditions. Sports figures are not often known as multilingual, cello-playing, walking astronomy dictionaries. Why is that? I'm not saying it can't happen or has not happened, just that more commonly it does not. The question now is why.

The chemical soup inside our skin is very complex and also relays messages. Can you say adrenaline? So, the complex cascade of states goes back and forth, not simply in one direction. This is something I don't think will be explained by message-passing tokens, or with the magic word 'firmware', but it might be explained by seeing feedback loops on all processes, such that the input to a firmware process may come from somewhere in the cascade that is downstream of that process' own output - so not a 100% feedback, but a derived feedback.
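That "derived feedback" can be simulated with two made-up toy functions (the coefficients are arbitrary, chosen only so the loop converges): a firmware-like process whose next input is not its own raw output, but a value taken from a stage further downstream in the cascade.

```python
def firmware(signal):
    """The firmware-like process itself (arbitrary toy transfer function)."""
    return signal * 0.5 + 1.0


def downstream(value):
    """A later stage in the cascade; its output is what gets fed back."""
    return value * 0.8


signal = 0.0
for _ in range(50):
    out = firmware(signal)
    signal = downstream(out)  # input derived downstream, not `out` fed back directly

# the loop settles at the fixed point s = 0.8 * (0.5*s + 1), i.e. s = 4/3
print(round(signal, 3))  # 1.333
```

The loop still stabilizes, but at a value neither stage would produce on its own - a small illustration of feedback that is derived rather than direct.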

I get the feeling that neural networks, several working together and tightly integrated might begin to simulate what I'm trying to describe, but I do not yet know this to be true.

Anyway, nice post.
2008-Dec-08, Mon @ 15:26
