Tuesday, July 21, 2009
For the past few weeks, I've been trying Freemind and Evernote. Freemind is a mind-mapping program useful for taking notes and brainstorming. I've also been using it to handle a to-do list using the four-quadrant approach (sort tasks by importance AND due date) plus the five-minute rule (if it takes less than five minutes, do it now), although it's not specifically meant for that purpose. I'm trying out Remember The Milk to see if it works better (a possible review later on).
Evernote is a note-taking application. It comes with a handy visual recognition system and automatic online backup of your notes. I did find it annoying that I had to register (free!) to use the program. It has a wide variety of features, and it's certainly better than my notepad approach to taking notes!
I certainly wish that the good features of both of these programs could be combined into something better. I've been taking notes on this idea in Evernote and brainstorming about it in Freemind, so you might see a program for this task (definitely free, and most likely under an open-source license) coming out in the near future.
Sunday, April 19, 2009
Why
As of right now, there are 1,010,000,000 search results for "Why" on Google. Maybe there's hope for humanity after all...
Saturday, April 11, 2009
Perception
The solution to any problem, the answer to any question, the meaning of life, the universe, and everything depends solely on our perception.
1+1=?
11 to a writer
2 to a mathematician
10 to a programmer
0 to a scientist
1 to a lover
pointless to a philosopher
profit to a businessman
symbols to a baby
3-1 to a teacher
the root of the whole problem to a politician
Frisbee time to a dog
waste of time, electricity, graphite, paper and human innovation to an environmentalist
whatever you say it means to the ignorant
2-14-2009
Labels: 1, AI, Answer, Everything, Life, Meaning, Pattern Analysis, Perception, Poetry, Problem, Question, Solution, Universe, Writing
Friday, January 2, 2009
Controlling Chaos
The Butterfly effect, one aspect of Chaos Theory, says "a butterfly's wings might create tiny changes in the atmosphere that may ultimately alter the path of a tornado or delay, accelerate or even prevent the occurrence of a tornado in a certain location." (http://en.wikipedia.org/wiki/Butterfly_effect)
The effect of small details on the resulting answer creates huge uncertainty and a huge margin of error, which limit the prediction capabilities of any simulation to just a few iterations. Take, for example, the recurrence n = n^2 (each term is the square of the previous one) with a starting value of n = 2:
2, 4, 16, 256, 65536, 4294967296...
If the starting value were instead n = 1.99, the sequence would be:
1.99, 3.9601, 15.68239201, 245.9374192, 60485.21414, 3658461130...
See how that small change resulted in a very large difference?
Suppose that this calculation was conducted as an experiment. We knew the initial value to be something close to 2. After a few "turns," the value was measured to be 60485. I picked a turn a bit far from the start so that the change would be measurable. Using these three pieces of information (value = 60485, recurrence n = n^2, turn = 5), we can recover the initial value by simply reversing the recurrence to n = sqrt(n). Doing so gives the sequence
60485, 245.9369838, 15.68237813, 3.960098247, 1.98999956
Of course, there is the problem of losing data while backtracking (more chaos...). I'd suggest taking the midpoint between the measured initial value and the back-calculated initial value to get a better estimate of the true initial value:
(2 + 1.98999956) / 2 = 1.99499978
If we were to use this newly computed value as the starting point, the sequence would look like:
1.99499978, 3.980024122, 15.84059201, 250.9243552, 62963.03202, 3964343401....
Looking back at all three sequences shows the benefit of this method:
Actual Values: 1.99, 3.9601, 15.68239201, 245.9374192, 60485.21414, 3658461130
Rounded Values: 2, 4, 16, 256, 65536, 4294967296
Re-Adjusted Calculations: 1.99499978, 3.980024122, 15.84059201, 250.9243552, 62963.03202, 3964343401
The difference (calculated value - actual value):
Rounded Value: 4294967296 - 3658461130 = 636506166
Re-Adjusted Calculations: 3964343401 - 3658461130= 305882271
The run that readjusted and recalculated the initial value ended up about twice as close to the actual value as the one that didn't. Continually readjusting the initial value and recalculating will, yes, be very processing-intensive for long sequences. It'd actually make more sense to simply restart the calculation with the current point in time as the initial value, but tracking back like this lets you adjust the measurements to account for measurement error. In the example above, subtracting the back-calculated initial value (1.98999956) from the measured one (2) shows the measurement error was about +/- 0.01. That error can then be plugged into the calculations to correct for it.
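For anyone who wants to play with this, here's a quick Java sketch of the whole forward/backward/midpoint idea. The class and method names are made up for illustration; it's a toy, not a serious prediction tool.

public class Backtrack {
    // Run the recurrence n = n^2 forward to the given turn (turn 1 = start).
    static double forward(double n, int turn) {
        for (int i = 1; i < turn; i++) n = n * n;
        return n;
    }

    // Reverse the recurrence (n = sqrt(n)) back from the given turn.
    static double backward(double n, int turn) {
        for (int i = 1; i < turn; i++) n = Math.sqrt(n);
        return n;
    }

    public static void main(String[] args) {
        double measuredInitial = 2.0;    // what we measured at the start
        double measuredAtTurn5 = 60485;  // what we measured at turn 5

        double backCalculated = backward(measuredAtTurn5, 5);
        double adjusted = (measuredInitial + backCalculated) / 2;  // midpoint

        System.out.println("Back-calculated initial: " + backCalculated);
        System.out.println("Adjusted initial:        " + adjusted);
        System.out.println("Turn 6 prediction:       " + forward(adjusted, 6));
    }
}

Running it should reproduce the 1.98999956 and 1.99499978 figures from above.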
Monday, September 29, 2008
A working AI: Mission Accomplished??
I'd taken a break from all the AI business for a few weeks, and I'm surprised at how much I got done working with a clear head. I've run some initial tests of my concept for the biologically based, but not weights-based, neural network. The results look unbelievably amazing. The program starts out with absolutely no info and then learns everything quickly and efficiently. Here's what I've added to the log (read Concept AI parts 1 and 2 first).
******
Miller's Law: A person can only keep 7 plus or minus 2 items in mind at one time.
http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html#Introduction%20to%20neural%20networks
Firing rules determine whether a synapse fires based upon the input.
http://www.spiegel.de/international/spiegel/0,1518,466789,00.html
A nerve cell opens or closes its ion channels before it fires.
*Going back to some more basic research about the fundamentals of the Brain itself.
http://www.enchantedlearning.com/subjects/anatomy/brain/Neuron.shtml
The brain has about 100 billion neurons. Then there are glial cells, which provide support (what kind?). Neurons vary in size.
Neuron: cell body, dendrites (signal receivers), and projections (the axon, which conducts the nerve signal). Axon terminals transmit electro-chemical signals over the gap known as the synapse.
Dendrites bring info to the cell body. The axon takes information away from the cell body. Bundles of them are called nerves, nerve tracts, or pathways. Dendrites branch from the cell body and receive messages.
One neuron has about 1,000-10,000 synapses, meaning it communicates with that many other neurons.
Glial cells: cells in the nervous system that don't carry nerve impulses. They basically manufacture stuff for the neurons.
Basically:
Dendrites: Input Connections (one or more)
Axon: Output Connection (only one). So it's like a linked list where the last element connects to the first. One axon can speak to multiple dendrites on the receiving end.
Axon Terminals: (more than one)
Synapse: Space between input & output connection
Hebb's Rule: Cells that fire together, wire together.
A synapse's strength depends on the number of ion channels it has.
9-26-08
6:47
It works! I went through the previously non-working AI program and reworked the whole thing. It's version 0.03 now. I'm amazed that it worked so efficiently once I put the charges concept into it. There are still many little bugs to work out. I've just tested it with some letter comparisons. The current data set is just 2 sets of 3 letters. It goes like this:
a b
b c
/end
d e
e f
/end
Then I can go in, reset the charges, and add another connection linking the two node-network segments together into one:
/rnodecharges
c d
Before those statements, charging node a would activate nodes b and c, but after them, it activates b, c, d, e, and f (with smaller charges as you go further down the line).
The main bug now is that the connection between c and d is looking too strong. I don't know; maybe it's supposed to be like that. d is emitting a charge similar to that of a.
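By the way, here's a minimal sketch of what the charges concept looks like in code. This is NOT the real program; the class names, the 0.55 decay factor, and the cutoff are all made up for illustration.

import java.util.*;

// One node in the network: a name, a current charge, and its connections.
class Node {
    final String name;
    double charge = 0.0;
    final List<Node> connections = new ArrayList<>();
    Node(String name) { this.name = name; }
}

public class ChargeNet {
    static final double DECAY = 0.55; // each hop passes on a weakened charge

    // Recursively spread a charge from a node to everything it connects to.
    static void fire(Node node, double charge) {
        if (charge < 0.01) return; // too weak to matter; stop spreading
        node.charge += charge;
        for (Node next : node.connections) fire(next, charge * DECAY);
    }

    public static void main(String[] args) {
        Node a = new Node("a"), b = new Node("b"), c = new Node("c");
        Node d = new Node("d"), e = new Node("e"), f = new Node("f");
        a.connections.add(b); b.connections.add(c);
        d.connections.add(e); e.connections.add(f);
        c.connections.add(d); // the "c d" statement linking the two segments

        fire(a, 1.0);
        for (Node n : Arrays.asList(a, b, c, d, e, f))
            System.out.println(n.name + " " + n.charge);
    }
}

The falloff is the point: after the c d link is added, charging a lights up everything down to f, just more and more faintly.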
10:59. After examining it a bit more, I've found and fixed some more bugs. I'm using a new data set now. It goes like this:
Greek is spoken in Greece
/end
Greece is in England
/end
The Greeks created the Olympics
/end
Athens is the capital of Greece
/end
Sparta is the enemy of Greece
/end
Where refers to the places like Greece Athens and Sparta
/end
After clearing the charges, I ask it:
where is Greek spoken
/end
And I got the result:
/end
/charges
greek 1.0
is 1.0
spoken 1.0
in 2.4935020205414458
greece 3.4648806640625
england 1.4565926569239502
the 0.9368278391792345
greeks 0.43610654105691715
created 0.6253771713864278
olympics 0.6719345918743638
athens 1.8224265625
capital 1.4442937950131418
of 1.4180175011313327
sparta 1.6934484375000003
enemy 1.3801804402274132
about 1.2319124453341073
spartans 1.3034834714624521
where 1.0
refers 0.55
to 0.72875
places 0.75366328125
like 0.7785765625000001
and 0.85331640625
This CLEARLY gives you Greece as the answer to the question. It's amazing how quickly it can learn. All of this was achieved with a minimal amount of processing and with NO INFORMATION hard-coded in (except /end and the other /commands). Such clear results with just a single repetition!
I think I'm onto something here. I'm now trying to incorporate larger and larger data sets to see how it handles them. Oh, by the way: the data set is completely made up. I have no clue what the capital of Greece is. This is just an example.
9/27/08
11:34 AM
I'm now working with a data set of 40 nodes creating 221 connections (which becomes 185 when sleep is applied with a 0.001 charge threshold). A basic version of the Sleep method has been created to clean up the weak nodes and increase efficiency. I've also put in a feedback system (+ or - depending on the answer) that strengthens or weakens a node based upon the feedback. It's still looking amazingly good. Will keep updating!
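Roughly, the sleep and feedback ideas look like this in code (prune whatever falls below a charge threshold, and nudge strengths after an answer). Another made-up sketch: the real program's internals differ, and the threshold and the plus/minus 10% nudges here are arbitrary.

import java.util.*;

public class Maintenance {
    // Sleep: prune any connection whose strength fell below the threshold.
    static void sleep(Map<String, Double> connections, double threshold) {
        connections.values().removeIf(strength -> strength < threshold);
    }

    // Feedback: nudge a connection's strength up or down after an answer.
    static void feedback(Map<String, Double> connections, String key, boolean good) {
        connections.computeIfPresent(key, (k, s) -> good ? s * 1.1 : s * 0.9);
    }

    public static void main(String[] args) {
        Map<String, Double> connections = new HashMap<>();
        connections.put("greek->spoken", 0.55);
        connections.put("the->of", 0.0005); // grammar noise below the cutoff

        feedback(connections, "greek->spoken", true); // reward a good answer
        sleep(connections, 0.001);                    // prune the weak link

        System.out.println(connections); // only greek->spoken survives
    }
}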
1:47 PM
Grammar is the real trouble here. Grammar words like of, and, or, an, a, the, etc. are connected to a whole bunch of different nodes, so they create some unnecessary charges.
An idea for improvement. Right now:
Greece refers to a place
Where refers to place and Greece, China, England, America, India
Place refers to Greece
So rework the system so that:
Where refers to place
Place refers to India, Greece, China, England, America, etc.
This reduces the number of connections. Adding this to the sleep system will let you delete the unnecessary (unacceptably weak) connections and the looping connections.
I'm still amazed at its efficiency, though. It's able to comprehend simple inputs like:
My name is Cobalt
Your name is Ron New
What refers to Name
What is My name
?
Cobalt
What is your name
?
Ron New
Ron New is New Ron spelled backwards and with a w instead of a u.
I've now moved it up a notch to read files. Each sentence must be put on a separate line, or else it gets REALLY slow.
The current data set is:
184 Nodes with 1678 connections!
The system resources taken up (at the start of the program):
Java.exe PID: 1472 CPU: 00% CPU Time: 00:00:03 Mem Usage: 348 K
This fluctuates from as low as 384 KB to as high as 10,000 KB. System.gc() is called a few times in the program so that memory is used wisely.
The network is working, but it still needs work. I'm going back to more research to apply new concepts into the network to improve it.
http://www.learnartificialneuralnetworks.com/
Action potentials are the electric signals that neurons use to convey information to the brain. All these signals are identical. Therefore, the brain determines what type of information is being received based on the path that the signal took. The brain analyzes the patterns of signals being sent and from that information it can interpret the type of information being received.
It turns out one aspect of my program, the weakening of signals, is not consistent with the biological system. "There are uninsulated parts of the axon. These areas are called Nodes of Ranvier. At these nodes, the signal traveling down the axon is regenerated. This ensures that the signal traveling down the axon travels fast and remains constant (i.e. very short propagation delay and no weakening of the signal)."
Instead of letting one node send a roughly equal amount of strength to each of its connected nodes, how about a system where the signal strength is divided up by percentage? This would mean that all charges must originate from the initial charges. The output would then be smaller numbers. This might be good or this might be bad. Let's find out!
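In code terms, the change amounts to something like this: a drop-in alternative to the fire() method from the earlier sketch (again hypothetical, reusing that sketch's Node class).

// Percentage-split variant: each node divides its incoming charge among
// its outgoing connections instead of passing a fixed-decay copy to each,
// so the total charge in the network is conserved from the initial input.
static void fireSplit(Node node, double charge) {
    if (charge < 0.01) return; // cutoff for negligible charges
    node.charge += charge;
    if (node.connections.isEmpty()) return;
    double share = charge / node.connections.size();
    for (Node next : node.connections) fireSplit(next, share);
}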
Nope, that doesn't work. I don't know why, but it doesn't work. I think I just got lucky hitting the target on that first try after the break. Some small, ever so minute changes can sometimes throw the system off.
*********************
Mission Accomplished?? Not exactly. Answering one question (will this work?) leads to a thousand new questions. I'm creating a hand-made knowledge base for it to work off of (some basic grammar concepts and general knowledge). The program can handle any information in any language that you teach it! The charges concept has been accomplished (sorta), so now I have to work on some other concepts to make the system independent. It has to take the answer and then figure out how it's going to be applied or presented as an answer. There are hundreds of applications for the charge-based neural network concept. I assure you, as soon as I'm done with a stable version of the program, it'll be released as an open-source program :) I'm planning on releasing it as part of my other project, known as OSCEAN (Open Source Content Environment and Abstract Network), which will put the AI to work in a few logical applications. But until then, the actual code will have to be kept under wraps. Just for the thrill of suspense when it's finally released!
But for now, please reply with some other types of tests that you want me to run on the system, and I'll post the results. Wish me & Ron (NewRon, get it?) luck!
Saturday, September 27, 2008
Levels Of Thinking
1. When your environment is unfavorable, you spend most of your time thinking about the basic needs for survival: food, water, air.
2. When you have food, water, and air, you think about other things that make life easier: electricity, technology, science, mathematics.
Suppose that that goal has been achieved as well and, somehow, everyday life has been made as easy as it possibly could be. Let's say we have robots that cook for us, clean for us, farm for us, etc. We won't have to do anything ourselves.
3. This will lead us to a new level of thinking that'll bring forth a new era in human history. I'm not sure what that will be, though.
But will we want to be just idle humans? Will we want to waste our lives consuming without producing? I believe that boredom makes us human. We are not satisfied by what we have. We feel that we are endowed with a higher purpose to do something else. Always something else. So it just may be that we humans will never reach that third level of thinking. But an artificially intelligent program, with no needs, goals, or desires, might be able to...
Sunday, September 21, 2008
Still here!
Yep, still here! I've been caught up with school for the past few weeks, and Hurricane Ike left us without power for 7 days! I also had the entire week off from school, but without power there was nothing else to do except think (like the good ol' days. Actually think for a while!). I'll be posting all about it in my next post. Stay tuned!