#{BitKrafted}
bit \'bit (noun)
Etymology: binary digit
1 : a unit of computer information equivalent to the result of a choice between two alternatives (as yes or no, on or off)
2 : the physical representation of a bit by an electrical pulse, a magnetized spot, or a hole whose presence or absence indicates data
craft \'kraft (tr. verb)
To make or produce with care, skill, or ingenuity
Monday, June 04, 2012
CloudFlare not LulzProof?: Today's Attack; Apparent Google Apps/Gmail Vulnerability; and How to Protect Yourself
It Looks Like You're Trying to Visit a Webpage. Would You Like Help?
"Clippy may be dead, but it’s going to be a long time before Microsoft lives him down."
Tuesday, May 15, 2012
Javascript FTW: JS GameBoy Color Emulator
Has Patriot Hacker The Jester (th3j35t3r) Been Doxed?
Saladin – Full Disclosure…. Leonidis not so much.
by th3j35t3r
‘The worst enemy a person can acquire is the enemy he once considered a friend.’ – Me – 2012
Additionally... and in complete back-to-back contradiction, as we all know (I never double-dip my quotes): ‘The enemy of my enemy is my friend.’ – Unknown.
So the usual suspects, the boys at reapersec (lowercase intentional), are co-ordinating and finding themselves allies. It’s funny, because I was informed of an organized attempt to discredit me, one that would require a prescribed reaction from me, over 18 hours ago (Scot and crew). I will be speaking up once again right here in reference to the direct challenge issued. Make no mistake, it’s not their first attempt. But when I tell you who they are, and you check out their MO and the sad troll-festers that RT them, you might begin to understand. For now, I was hoping for a day of peace... ya know, so that is what I am pursuing.
Upcoming and right here on this URL: they forced my hand, so let me tell you all about #saladin. To be continued... within 24 hours. Meantime, I am gonna have that downtime I talked about. Peace.
J
Tuesday, January 24, 2006
HA!...Now that's a bit rich coming from George "Dubblya"
Michelle Malkin: CERTAIN UNALIENABLE RIGHTS: "A Proclamation by the President of the United States of America
Our Nation was founded on the belief that every human being has rights, dignity, and value. On National Sanctity of Human Life Day, we underscore our commitment to building a culture of life where all individuals are welcomed in life and protected in law.
America is making great strides in our efforts to protect human life {TIGGR: ???? i almost believed it, for a split second..then another bomb dropped in the gulf}. One of my first actions as President was to sign an order banning the use of taxpayer money on programs that promote abortion overseas. {TIGGR: Nice work GW...!!...save the babies for YOUR soldiers to kill!} Over the past 5 years, I also have been proud to sign into law the Born-Alive Infants Protection Act, the Unborn Victims of Violence Act, and a ban on partial-birth abortion. In addition, my Administration continues to fund abstinence and adoption programs and numerous faith-based and community initiatives that support these efforts.
When we seek to advance science and improve our lives, we must always preserve human dignity and remember that human life is a gift from our Creator. We must not sanction the creation of life only to destroy it. America must pursue the tremendous possibilities of medicine and research and at the same time remain an ethical and compassionate society {TIGGR: REMAIN implies inertia from an existing state... America would need to NOW BE "an ethical and compassionate society"... SCOFF!}
National Sanctity of Human Life Day is an opportunity to strengthen our resolve in creating a society where every life has meaning and our most vulnerable members are protected and defended including unborn children, the sick and dying, and persons with disabilities and birth defects. This is an ideal that appeals to the noblest and most generous instincts within us, and this is the America we will achieve by working together.
NOW, THEREFORE, I, GEORGE W. BUSH, President of the United States of America, by virtue of the authority vested in me by the Constitution and laws of the United States, do hereby proclaim Sunday, January 22, 2006, as National Sanctity of Human Life Day. I call upon all Americans to recognize this day"
Thank you, George W... an insightful look into the warped opinions of the most dangerous man on earth.
Thursday, September 01, 2005
Bitkraft takes another step forward - No Base Class Required!
click here! to go straight to the new No Inheritance Demonstration!
Monday, August 29, 2005
Bitkraft, DotNetNuke, Flesk and others..
In short, YES, Bitkraft can work with other frameworks such as DotNetNuke and Flesk.Accelerator - it is just a matter of chaining together the appropriate inheritances in the Bitkraft Framework. So, as a starting point, tonight (28th August - Aus Eastern Standard Time) I will be releasing another version of Bitkraft (100% backwards compatible, as always) that will include a NEW PageTemplate class (Bitkraft.Web.DNNPageTemplate) designed to allow you to create DotNetNuke content for your portal using the Bitkraft Framework - WOW!....
If you have any queries regarding this process, success stories of using Bitkraft with other frameworks, or suggestions for other frameworks that could be catered for by the Bitkraft Framework (e.g. Flesk.Accelerator), please contact Mr.Tiggr@gmail.com or post comments here!
Cheers - TIGGR.
################################################################
Hi There [UserName Clipped],
Thank you for your interest in the Bitkraft Framework; I am always very interested to hear new techniques and ideas on how it may be used - your query regarding its compatibility with DNN is an interesting one.
In short, as it stands currently, the Bitkraft Framework will not operate with DNN, as DNN uses the same technique of inheritance - a module inherits from the DotNetNuke equivalent of Bitkraft.Web.PageTemplate - and it is not possible to inherit from multiple classes in this manner (that is to say, you cannot inherit from Entities.Modules.PortalModuleBase and Bitkraft.Web.PageTemplate at the same time).
This is not to say that the two cannot work together, though! As I am in control of the source for the Bitkraft Framework, I can indeed inherit from Entities.Modules.PortalModuleBase rather than System.Web.UI.Page when building the page template class.
The short end of the story is that I have taken your query, and the queries of others, on board and will be making a new set of Source and Binaries available this evening (Aus. Eastern Standard Time). I will be sure to email you with details of where they are available from, but the end result will be that, instead of inheriting from either Entities.Modules.PortalModuleBase or Bitkraft.Web.PageTemplate alone, you will be able to inherit from a new class (Bitkraft.Web.DNNPageTemplate) that chains the two together.
I hope this is a timely and acceptable solution to your query, and I will be in touch as soon as I have completed the final compilation with details of availability. I do ask that you report back to me with any information you have on how it goes - I am not running DNN on any of my production servers, so am unable to test the full features of "Bitkraft in DNN" in a "Live" environment.
################################################################
Hi there [UserName Clipped],
Thank you for your enquiry regarding Bitkraft; I welcome any and all comments and queries and try to address them all as promptly as I can.
In regards to using Bitkraft with the Flesk libraries, I can only offer the same advice as I have to users of other products such as DotNetNuke. Bitkraft and Flesk use the same technique for implementing their functionality - inheriting from the System.Web.UI.Page class. In Flesk, the base class appears to be Flesk.Accelerator.Page; in DotNetNuke it is Entities.Modules.PortalModuleBase.
In saying this, however, the advantage of Bitkraft being an Open-Source solution is that we can actually CHAIN these inheritances, thus making Bitkraft compatible with almost any other framework including, I assume, Flesk... This would simply involve obtaining a copy of the Flesk assembly (dll) that you are using, obtaining a copy of the Bitkraft Source, creating a modified version of the PageTemplate class that inherits from Flesk.Accelerator.Page rather than System.Web.UI.Page, and recompiling.
The end result of this would be a new Template class (perhaps called Bitkraft.Web.FleskPageTemplate) to inherit from. To use the new template and get both the power of Flesk and Bitkraft in your Web Applications, all you have to do is to inherit from Bitkraft.Web.FleskPageTemplate instead of Flesk.Accelerator.Page!!!
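Bitkraft itself is written in C#, but the chaining idea can be sketched in JavaScript, whose classes are likewise single-inheritance. All class names below are illustrative stand-ins, not the real Flesk or Bitkraft types:

```javascript
// Stand-in for System.Web.UI.Page.
class Page {
  render() { return "<html/>"; }
}

// Stand-in for Flesk.Accelerator.Page: layers its feature on top of Page.
class FleskPage extends Page {
  render() { return "[compressed]" + super.render(); }
}

// The chained template, analogous to Bitkraft.Web.FleskPageTemplate:
// instead of trying to extend Page and FleskPage at once (impossible in a
// single-inheritance system), the Bitkraft template is recompiled to sit
// at the end of the chain.
class FleskPageTemplate extends FleskPage {
  render() { return "[callbacks]" + super.render(); }
}

// A page built on the chained template gets both behaviours.
console.log(new FleskPageTemplate().render()); // "[callbacks][compressed]<html/>"
```

The design point is simply that each framework adds its behaviour in sequence, so inheriting from the last link in the chain gives you all of them at once.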
If you would like to attempt this yourself, please feel free to obtain the source code for the Bitkraft Framework from: http://www.tiggrbitz.com
Should you wish me to undertake this task, please forward any relevant information (particularly a copy of the Flesk assembly, which I will obviously destroy after I have compiled the new Bitkraft assembly so as not to violate any of the Flesk licensing agreements). I aim to make a new public release of the Bitkraft Framework with DotNetNuke compatibility built in tonight (Aus. Eastern Standard Time); forwarding this information ASAP will ensure that the new release of Bitkraft contains a new page template (Bitkraft.Web.FleskPageTemplate) for you to work with!
Thursday, August 18, 2005
Bitkraft - An "AJAX" Library for .NET with a difference
What is Bitkraft?
Bitkraft is a CLR-based (.NET) web framework that allows distributed web content to be created and served in a unique fashion. It is written in C# and compiles for operation under the Microsoft .NET Framework 1.1+ or the Mono Framework, making it portable to almost any platform.
At its core, the Bitkraft framework extends the ASP.NET architecture to fully support JavaScript-based server callbacks using the XmlHttpRequest object as a transport layer, in a fashion commonly referred to today as AJAX (Asynchronous JavaScript and XML). There are many "AJAX" frameworks available today; however, the Bitkraft framework is unique in the way it seeks to blur the lines between client (browser) and server, and in the manner in which it allows the development of truly smart web-based applications that intelligently distribute their functionality between client and server in a seamless manner.
Indeed, Bitkraft deliberately tries not to describe its technology as "AJAX"-based because of the connotation of the use of XML (à la SOAP/web services). Bitkraft does NOT use XML; instead, JSON (JavaScript Object Notation) is used as the main transport for communications between client and server. Using JSON as opposed to XML for message formatting produces a lighter-weight message and also has the advantage of being a native format that can be accessed as a real object by most modern clients (browsers). The Bitkraft framework translates CLR types directly to and from the JSON format, resulting in objects that behave and appear the same both at the client and at the server.
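A quick illustrative sketch of that round-trip in JavaScript (using the built-in JSON object as a stand-in for whatever serializer the framework itself uses):

```javascript
// A server-side object, serialized to JSON...
const serverObject = { name: "widget", count: 3, tags: ["ajax", "json"] };
const wire = JSON.stringify(serverObject); // the string sent over XmlHttpRequest

// ...arrives at the client as a native JavaScript object: no XML DOM
// parsing, and the same shape it had on the server.
const clientObject = JSON.parse(wire);
console.log(clientObject.tags.length); // 2
```

The same string deserializes back into an equivalent object on either end, which is what lets client and server share one view of the data.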
The Bitkraft framework allows web content to be developed in a single environment and promotes the distribution of functionality between the client and the server. It allows objects to be created that behave predictably regardless of whether the implementation is being run at the client or at the server, and allows objects to expose methods that are implemented either on the client or on the server without re-posting or re-rendering page content. This approach reduces the size and number of round-trips to the server, updating the content of a single page by requesting it on demand from the server instead of relying on full-page re-posts and re-rendering of complete pages.
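A minimal sketch of that idea (all names are hypothetical, not the actual Bitkraft API, and the "server" is simulated in-process where Bitkraft would use XmlHttpRequest):

```javascript
// Hypothetical sketch -- not the real Bitkraft API.
const serverMethods = {
  // Implemented "server-side": only this small result crosses the wire,
  // not a re-rendered page.
  getStockPrice: symbol => ({ symbol, price: 42.5 })
};

function invokeServer(method, args) {
  const request = JSON.stringify({ method, args });   // outbound message
  const { method: m, args: a } = JSON.parse(request); // server unpacks it
  return JSON.parse(JSON.stringify(serverMethods[m](...a))); // JSON reply
}

// A client-side object mixing local and remote methods transparently:
// the caller of refreshTicker never sees which is which.
const page = {
  formatPrice: q => q.symbol + ": $" + q.price.toFixed(2),  // runs on client
  refreshTicker: symbol => page.formatPrice(invokeServer("getStockPrice", [symbol]))
};

console.log(page.refreshTicker("BK")); // "BK: $42.50"
```

Only the quote object travels between the two sides; the formatting happens in the browser, which is the round-trip saving the paragraph above describes.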
The end result?.....
- Web content that is dynamic and fast!
- A way of developing web content that hides the complexity of communicating between client and server from both the developer and the user.
- A whole new breed of web applications that run inside the ever-popular and familiar web browser but break away from the mould of the traditional "Network Of Pages".
- Not just Web Pages but real Web Applications.
- Simply: a smarter web framework.
Tuesday, August 10, 2004
CHO THEORY: Sleep Boosts Ability To Learn Language, University Of Chicago Researchers Find
"Sleep has at least two separate effects on learning," the authors write. "Sleep consolidates memories, protecting them against subsequent interference or decay. Sleep also appears to 'recover' or restore memories."
Scientists have long hypothesized that sleep has an impact on learning, but the new study is the first to provide scientific evidence that brain activity promotes higher-level types of learning while we sleep.
Although the study dealt specifically with word learning, the findings may be relevant to other learning, Nusbaum said. "We have known that people learn better if they learn smaller bits of information over a period of days rather than all at once. This research could show how sleep helps us retain what we learn."
In fact, the idea for the study arose from discussions Nusbaum and Fenn had with Margoliash, who studies vocal (song) learning in birds. "We were surprised several years ago to discover that birds apparently 'dream of singing' and this might be important for song learning," Margoliash said.
PHILOSOPHY: Interesting and uninteresting questions about torture
I would love to answer "No", but it's a complicated question. The standard arguments in favor of torture are well known. Imagine we are in a situation of imminent peril to a very large number of people (a "ticking time bomb," literally or figuratively), and we know for sure that a certain prisoner has information that could be used to prevent the disaster, and strongly suspect that the prisoner would give up the information under torture but would not under conventional interrogation. That's a lot of conditions that must be satisfied (1. imminent danger to 2. a very large number of people, 3. knowledge that prisoner has crucial information that 4. they will not give up without torture but 5. they might give up under torture), but I would add at least one more: 6. the prisoner must, by previous actions, have forfeited even minimal personal rights, e.g. by committing some egregious crime. I don't think it's right to torture an innocent bystander who happened to overhear a terrorist plot but for some reason doesn't want to divulge the information. If all of these circumstances clearly applied, I would be willing to concede that torture would be justified. Under ordinary non-desperate conditions, I strongly believe that every person has a minimal set of rights that society has no right to violate; but under well-defined emergency conditions, the interests of the larger group can reasonably take precedence.
The problem, of course, is that such stringent conditions rarely apply. I used to be in favor of the death penalty, as I believed that there were some people who had, by their behavior, given up any right to live. I still believe that, but now I am strongly anti-death-penalty, only because I have no confidence whatsoever that our justice system can accurately determine who those people might be. Even the chance of one mistake, putting someone to death who was innocent (or even not as unforgivably guilty as had been supposed), makes the use of the death penalty completely unpalatable. Similarly with torture -- the danger that it could be used against people who do not meet all of the above criteria is real and terrible. Of course, with the death penalty there is a straightforward alternative (life imprisonment), whereas in the shadow of a ticking time bomb the choice may not be so clear.
PHILOSOPHY: Present Living, part deux
#{TIGGR} - Again, NOT my work but a great topic... I kinda am a bit obsessed with this topic at the moment - it's taking up lots of my spare intellectual bandwidth!...
Present Living, part deux
Present Living [part un], Luke's "Past, Present, and Future" (response).
Luke's post makes me want to speculate as to what the difference could be between two people (besides claims to being screwed up) that leads them to have [on the surface] similar present-thinking philosophies, but for one to need (?) a belief in other-worldly permanence and the other not to:
I do believe there are permanent things in the world, but I cannot empirically prove their existence. I believe that the Creator has existed for all time and will continue to exist long after this world has been erased from the drawing board. I have faith that when I die my soul (spirit) will continue to live on as a unique individual, like I am now, and that I will spend the rest of eternity in bliss.
Perhaps it is my belief in the spiritual that frees me from the need of permanence in this world...then again I could just be screwed up in the head. You decide.
My first thought when I read this was a laughing, "Hey, that's cheating!"
But it does raise an interesting point that I didn't address in Part un--spirituality.
I have this idea/philosophy that because of the inherent subjectivism of existence (stay with me, I'm not heading into full-blown subjectivism...), the only way that I can know the world as well as I can is through my senses and whatever I can garner from others' perspectives. Where is this relativity? It comes primarily from the natural human limitations of our sensory organs and our brains, and is built upon by the unique differences in abilities and experiences between individuals. Combine my poor eyesight, hypersensitivity to touch, my belief system, and the intricate web of my life experience, and you get an interesting "filter" for life.
Now imagine the possible permutations across the six billion-ish people (?) in existence presently.
Whether or not there is some "absolute" reality outside of our perceptions is up for debate and is actually rather irrelevant. What I can perceive is what there is. What someone else can perceive is what they have. With "perceive" extending to opinions/faith in the supra/supernatural, since emotions, thoughts, and experiences (that whole brain thing) are a crucial part of the "filter".
My goal, of course, is to widen my perceptions by learning about possibilities for alternates. (Why? Because it's something that draws me in and I've learned a lot of useful things this way.) For instance, I remember one issue that was brought up in the Theory of Knowledge class I took in high school was the fact that all of our scientific detecting equipment is, by requirement, based upon our five senses--we have things to make us see farther or with more depth, to hear things in ranges our ears don't hear, and to preserve records better than our brains' memory ever can. But the idea of equipment that detects based on some type of input that we don't even have a sensory organ for boggles the mind. How could we even develop such a system, since we have no concept of any other type of input than what our five senses afford us? How would we interpret the data? We turn infrared spectra into visible colors based on temperature--so our perceptions of infrared are in terms of temperature (touch) and [artificial] color (sight). All that's doing is intensifying our sense of touch (effectively) and translating it into something visible. Not stepping outside of our boxes in any kind of revolutionary, new-sensory-input type of way, but widening our perceptions. And in such a manner, we think we can gather more information about the world around us.
I would maintain that this can be done on a personal, day-to-day basis. When I listen to someone tell me something about their history, this is some event that was filtered by that person when it occurred, is refiltered every time they think about it, was refiltered again when they vocalized it, and was filtered by me upon hearing it. That's a hell of a lot of "filters", but that's what's interesting: my immediate concern is what my and the speaker's current filters are and how they developed.
And how in tarnation does this relate to that old "present living" idea? Because the interesting challenge in learning about people is that all I can really get is a hint of a snapshot of the other person's current state, even when discussing their history, and this current state is constantly changing. And, because [to me] this is the only way I can and do see the world, everything is ephemeral, and what matters is the here and now.
PHILOSOPHY: Past, present, future
Past, present, future From Lissa-Love's 'Present Living' post....
#{TIGGR} - Well, again I disclaim my own work here... this is not my doing: thanks to Luke over at “Luke Says Moo!“ for this addition to a previous topic hosted here... here is an alternative view on “Present Living“
I thought I'd give a little follow up because she keeps harassing me about it...and because it is something to think about on occasion.
Anyway, I classify myself as someone who lives in the present. I don't dwell on the past or the future. I learn from my experiences and relish history when I have a chance to take it in. I plan for the future in some ways, but I don't plan for what my life will really be like.
Normally I move with the flow and do what needs to be done as the tide of the day brings changes with each ebb and flow. I know that I have little to no control over the actions of those around me and throughout the world, so I do my best to work with what I'm presented and go on from there.
Which brings me to thoughts on permanence. Lissa suggests that people living in the past and future deem that "for anything to have meaning, to be important, it must have longevity and be 'permanent'." Which seems like a good conclusion considering the present gives very little permanence to anything.
"The only thing that stays the same is everything changes." (Tracy Lawrence, "Time Marches On")
To me there is no permanence in this corporeal world because nothing can stay the same. Throughout human history, recorded or otherwise, everything has changed. Languages, cultures, war, kings, lands exchange owners, and even the Earth changes shape given enough time. Even the heavens mark the change in the passing of time. Stars burn out, new stars are formed, comets zip by, and space itself continues to change with each passing moment.
For that matter, neither the Earth nor the heavens mark any life except perhaps the very, very few that become part of ancient myths and modern histories. Even then, what we perceive as permanent will eventually be washed away, like a sand castle on the beach, because time will wear away all meaning and the future generations will be left to wonder and speculate just as we do with the Aztecs, Mayans, Egyptians, Greeks, Romans, the Aryan peoples, various African peoples I don't even know the names of, and I'm sure you get the point by now. Given enough time nothing that you or I understand as being permanent will be permanent.
Perhaps my concept of the future is too broad and those that live in the past and future only live in a narrow band of the past and future...perhaps just defining their present to be a few hundred years before and after their lifetime.
Anyway, I do believe there are permanent things in the world, but I cannot empirically prove their existence. I believe that the Creator has existed for all time and will continue to exist long after this world has been erased from the drawing board. I have faith that when I die my soul (spirit) will continue to live on as a unique individual, like I am now, and that I will spend the rest of eternity in bliss.
Perhaps it is my belief in the spiritual that frees me from the need of permanence in this world...then again I could just be screwed up in the head. You decide.
Realize deeply that the present moment is all you ever have.
- Eckhart Tolle
Friday, July 23, 2004
GEEKY: Parallel Polarity
Parallel Polarity
Though this is not actually my work, I think this is great. Thanx to Matt at The Wayward Weblog for this gem..
Sometimes scientific discoveries are not made in the lab, sometimes they are born out of conjecture and backed up by observation alone. Thank goodness for that or I'd have never discovered the truth. I suppose many of you have thoughts on occasion that traverse the wide open possibilities of our reality, pondering classical conundrums and esoterica that on the surface seem like silly notions, so seldom do you venture much deeper. I, on the other hand, take great pride in plummeting head first into abstract thought and taking even somewhat clever notions beyond into the realm of abject absurdity.
You might think I do this with the noble intention of seeking out a laugh, something to tickle the funny bone of my family and friends or to evoke a short chuckle from me alone as I sit in a meeting ignoring everything else but my own little amusements. But this is not the case. I do have a more serious destination and I am most diligent in its pursuit. Certainly, I cannot escape the onset of laughter as I do uncover the ridiculous and bizarre. That's to be expected. It comes with the territory, and I don't mind it one bit; it soothes the soul and is the medicine for what ails you. Yet, however delightful that feeling of levity may be, it is nothing compared to the utter joy felt when I finally stumble upon a revelation that is destined to rock the foundation of understanding that I and everyone else on this great green ball cling to daily.
That's why I spared no time in rushing to my PC this morning in order to craft this post, because I knew I finally had something of note that, when shared with the rest of humanity, might just change the way we look at ourselves, the world around us and in truth the fabric of reality that underpins our very existence. So without further ado, I'll just blurt it out and then get on with the explanation and supporting evidence.
It has come to my attention, through observation, deep analysis and profound insight, that travel between parallel dimensions is not only possible, it is happening all the time. Now, before you commit me to the loony bin, hear me out so you will feel confident that you had all the facts, all the arguments and proofs.
Certainly, this might seem a big deal. Am I talking about Slider-esque travel through a vortex into a parallel dimension, where people look like us but are up to no good? Not really. I'm not talking big science-fiction Hollywood style travel, or even highly speculative jaunts through a black hole (especially since Stephen dashed my hopes on that one.) I'm talking about everyday, normal slips between realities that we experience even though we don't know that it is happening.
You see, as it turns out, the walls between parallel realities are not so thick at all. They're not even walls really, just sort of a conceptual boundary. The closer any two realities are to one another, the more they sort of touch, overlapping one another if you will. So another universe where I diverge from myself quite considerably, or where apes rule the planet, would actually be so drastically different that slips large enough to send anyone from here to there would be unlikely. However, realities that diverge only slightly - so subtly that they are almost identical except to the trained and diligent mind - are in fact so close that they overlap entirely. Between these we travel constantly, slipping in and out of each without a second thought.
The real trick to recognizing that these traversals do indeed occur is to observe it in action. Take any monumental event that you and others may have experienced in the recent past. Ask each person involved to write down exactly what happened. Compare. You'll find quite a deviation between observers. Is this because each is so inept as to be unable to recognize what you have obviously recalled as the truth? Or could it be that each has remembered the event quite accurately, yet their experiences diverge due to differences in reality at the time of observation?
This is not just hypothetical. It happens to me all the time.
Have you ever looked at a written word and thought, “that doesn't look right?” Well, it doesn't because you've shifted realities.
Have you ever started thinking something that you later find out your spouse, sibling or friend was also thinking? Without ever talking? Well, you did in a different reality.
This is why you always feel out of sorts. The rules keep changing. You're never really in the same place again. You keep slipping.
Have you ever looked at the keyboard and wondered what happened to the Quelm? I have. It was sort of a squarish letter. And this one, 'Z', I can't quite recall ever using or seeing that one before.
Wednesday, July 14, 2004
AI: The Humanoid Project
The Humanoid Project
The Humanoid Project is a project being undertaken at Chalmers University in Gothenburg, Sweden. The long term aim is to build a life-sized humanoid robot which is capable of walking, navigating around obstacles, and operating completely autonomously whilst being controlled verbally. To date a 60cm tall prototype robot named Elvis has been built (see below).
The behaviours of the robot, such as walking, are entirely developed using evolutionary software. This software employs genetic algorithms to enable the robot to teach itself to maintain balance and walk. Thus adaptability is placed before precision.
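As a rough illustration of that idea - a toy genetic algorithm, not the project's actual software - evolving a set of "servo offsets" toward a target posture might look like this:

```javascript
// Toy genetic algorithm: evolve 4 "servo offsets" toward a target posture.
// Purely illustrative; the real system rewards balance and walking, not
// closeness to a fixed vector. A deterministic PRNG keeps runs repeatable.
let seed = 1;
const rand = () => (seed = (seed * 16807) % 2147483647) / 2147483647;

const TARGET = [0.1, -0.3, 0.5, 0.0];
// Fitness: negative squared distance from the target (higher is better).
const fitness = g => -g.reduce((s, v, i) => s + (v - TARGET[i]) ** 2, 0);
const mutate = g => g.map(v => v + (rand() - 0.5) * 0.1); // small random nudges
const byFitness = (a, b) => fitness(b) - fitness(a);      // best first

// Random initial population of 20 genomes.
let population = Array.from({ length: 20 }, () => TARGET.map(() => rand() * 2 - 1));
const initialBest = fitness([...population].sort(byFitness)[0]);

for (let gen = 0; gen < 200; gen++) {
  population.sort(byFitness);
  const parents = population.slice(0, 5);                 // elitist selection
  const children = Array.from({ length: 15 }, () =>
    mutate(parents[Math.floor(rand() * parents.length)])); // mutated offspring
  population = parents.concat(children);
}

const finalBest = fitness(population.sort(byFitness)[0]);
console.log(finalBest >= initialBest); // true: elitism never loses the best genome
```

Adaptability before precision, as the paragraph above puts it: nothing in the loop knows the "right" servo values; selection pressure alone pulls the population toward them.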
Elvis, The Prototype:
The prototype robot is bipedal and has human-like geometry and motion. It has legs, arms and hands, all controlled by 42 servos. Senses are provided by microphones, cameras and touch sensors.
Evolutionary algorithms and genetic programming are used in three hierarchical layers of control:
Reactive layer - for reactive behaviours such as balancing.
Model building layer - for memories of past events, evolving models and basic control tasks.
Reasoning layer - for symbolic processing, higher brain functions such as navigation and safety.
Behaviours and Control:
Balancing - achieved through touch sensors on the feet to find the centre of gravity and two electronic gyros mounted in the head to minimize head movements. The robot is initially suspended in a safety harness with a sensor to detect total loss of balance. Evolutionary algorithms in the reactive and model building layers enable the robot to learn to walk and balance.
Vision - the robot has two CCD cameras for stereo vision. A program is evolved to build a 3D model of the environment. Another program is then evolved which uses this 3D map to build generalised representations of 3D objects such as boxes and cones.
Navigation - the third symbolic reasoning layer is used to integrate vision and walking to follow walls and avoid obstacles.
Audio orientation - 2 microphones are used for stereophonic hearing. A genetic programming system evolves a program to determine the direction of sound and to focus the robot's attention. Future work will be on separation of sound sources and recognition of commands.
Manipulation - each hand has 2 fingers and a thumb. Each is equipped with touch sensors capable of sensing forces from 10 g to tens of kilograms.
Future Plans:
The prototype robot is not yet fully autonomous. It is currently controlled by a remote NT workstation and power supply. The plan is to make it fully autonomous by including an onboard power supply and main processing unit - perhaps a handheld PC. Total weight should be less than 5 kg. The long term aim is a human-sized humanoid weighing 60 kg.
Links:
The Humanoid Project: humanoid.fy.chalmers.se
AI: Computing with Leech Neurons
Computing with Leech Neurons
Scientists at the Georgia Institute of Technology in the USA have extracted living neurons from leeches and connected them, through micro-electrodes, to a computer. By stimulating the neurons in a specific way and recording their responses it was possible to get the neurons to perform rudimentary calculations such as adding two numbers.
The experimental set-up (pictured below) consisted of just a handful of neurons placed in a petri dish. Leech neurons were used because they have been extensively studied in the past and their behaviours are well understood. The neurons were not manually connected together. Rather, they were encouraged to grow and form synapses of their own accord.
This research is still in its very early stages. Professor William Ditto, the leader of the team at Georgia Tech, says that he hopes to go on to build much larger versions of these biological computers. Eventually such cultured neurons could be integrated with artificial eyes and ears to give a complete robot brain. The advantages of such biological computers would be their increased flexibility. They are not restricted by the rigid programming rules of today's computers. Instead they could work things out for themselves.
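As a loose software caricature of the idea (not the Georgia Tech setup, which used real neurons in a dish), here is a toy integrate-and-fire "adder": each operand is encoded as a train of unit pulses, and the downstream cell's spike count reads out the sum. The threshold and encoding are invented for illustration.

```python
def integrate_and_fire(pulses, threshold=1.0):
    # Accumulate incoming pulses; fire and reset once the threshold is hit.
    v, spikes = 0.0, 0
    for p in pulses:
        v += p
        if v >= threshold:
            spikes += 1
            v = 0.0
    return spikes

def add(a, b):
    # Encode each operand as that many unit pulses, delivered in sequence.
    return integrate_and_fire([1.0] * a + [1.0] * b)

print(add(3, 4))  # 7
```

The point of the biological version is that the "wiring" between cells is grown rather than specified, so the mapping from stimulus to spike count has to be discovered experimentally rather than written down like this.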
Neurobiology of Leeches:
The leech is an annelid, the biological grouping which includes earthworms. The Latin name for the well-known blood-sucking leech is Hirudo medicinalis. It is this species which is applied to wounds so as to remove possible infection.
The leech comprises 1 head segment and 21 body segments. The head segment contains the brain (in 2 parts, dorsal and ventral). Each of the body segments has a ganglion of about 400 neurons. These ganglia are not much smaller than the brain. Thus the total number of neurons per leech is 15,000 to 20,000.
One reason that the leech nervous system has been so well studied is that its neurons are relatively large (60µm). The morphology of the neurons is also remarkably uniform between ganglia and animals. The neurons in the head, being smaller, are less well understood.
It is remarkable that despite its relative paucity of neurons the leech is capable of such an array of movements and behaviours (body waving, crawling, swimming, shortening, foraging for food, feeding, and mating).
Leech Neuroscience at Emory University:
This research into computing with leech neurons is being conducted in collaboration with scientists at Emory University, also in Atlanta, Georgia. The focus of their research is the neural circuit which controls the leech heart. Oscillatory neurons in the 3rd and 4th ganglia are found to be coupled with the heart interneurons of the 1st and 2nd ganglia. Together these cells form the timing network which controls the rhythmic motor cells of the heart.
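A rhythm-generating network like this can be cartooned as two cells that inhibit each other, where the active cell fatigues and the suppressed one escapes. The sketch below is a deliberately crude discrete version of that "half-center" idea; the cell names and the fatigue limit are invented, and real heart interneurons are continuous, spiking cells, not two-state switches.

```python
def half_center(steps, fatigue_limit=5):
    """Two cells, A and B, inhibit each other; the active cell fatigues
    after `fatigue_limit` steps and the suppressed cell escapes. The
    resulting alternation is the rhythm - no external clock required."""
    active, t = "A", 0
    rhythm = []
    for _ in range(steps):
        rhythm.append(active)
        t += 1
        if t >= fatigue_limit:
            active = "B" if active == "A" else "A"
            t = 0
    return rhythm

print("".join(half_center(20)))  # AAAAABBBBBAAAAABBBBB
```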
Links:
Georgia Institute of Technology: www.physics.gatech.edu/chaos/leeches
Emory University: calabreselx.biology.emory.edu
Wednesday, July 07, 2004
COOL: Darwinian Poetry
The goal of this project is to see if non-negotiated collaboration can evolve interesting poetry using (un)natural selection.
Huh?
Ok, here's the idea: starting with a whole bunch (specifically 1,000) randomly generated groups of words (our "poems"), we are going to subject them to a form of natural selection, killing off the "bad" ones and breeding the "good" ones with each other. If enough generations go by, and if the gene pool is rich enough, we should eventually start to see interesting poems emerge.
The cool part is that YOU are the arbiter of what constitutes "good" and "bad" poetry......
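The selection scheme described above can be sketched roughly as follows. In the real project the fitness function is human votes between pairs of poems, so the variety-preferring scorer below is purely a stand-in, and the word list is invented.

```python
import random

random.seed(0)
WORDS = "moon river glass silent burn whisper stone ash".split()

def random_poem(n=6):
    return [random.choice(WORDS) for _ in range(n)]

def breed(p1, p2):
    # One-point crossover plus an occasional word mutation.
    cut = random.randrange(1, len(p1))
    child = p1[:cut] + p2[cut:]
    if random.random() < 0.2:
        child[random.randrange(len(child))] = random.choice(WORDS)
    return child

def fitness(poem):
    # Stand-in scorer: in the real project, YOU are the fitness function.
    return len(set(poem))

population = [random_poem() for _ in range(1000)]
for generation in range(20):
    population.sort(key=fitness, reverse=True)
    good = population[:500]                      # the "good" poems survive
    population = good + [breed(random.choice(good), random.choice(good))
                         for _ in range(500)]

print(" ".join(max(population, key=fitness)))
```

Swap the stand-in scorer for accumulated visitor votes and you have the project's loop: kill the bottom half, breed the top half, repeat.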
DISCUSS: A Question of Ethics
I really shouldn't cross-post like this, but I MUST point you to an excellent discussion topic... check out this:
A Question of Ethics
In an older post, I reflected briefly on the idea that artificial life could be given different basic motivations than biological life. Biological life essentially does whatever it can to survive and multiply. Human society is shaped by this same motivation. One could assume that any 'machine society' formed of sentient machines would be shaped by their most basic motivations.
Since we're the ones creating them, it seems that we could choose what those basic motivations are.
Let's imagine a scenario. What if sentient machines were created, with the 'hardwired' core motivation of 'making humans happy'. That is, the most basic feeling of satisfaction these machines would feel would come from making humans happy. This would be as strong a motivation in them as we are used to seeing reproduction be in other forms of life."...
Friday, May 21, 2004
PROSE: How Perfectly Succinct...A code-hound at heart
I cannot take the credit for this. Posted by Matt on The Wayward WebLog, this could have been written by me... if you know what I mean.
Programming in the Brain
I’m staring at source code again. This time it’s not mine. I’m not looking for a flaw. I’m not trying to fix something. I’m just trying to understand; hundreds of files, thousands of functions, millions of lines. I stare at them and I read them, not line by line. I skip over those and glance at declarations. I see words stitched together into names with parameters and types. I’m not sure what they mean yet, unfamiliar with the pattern. I don’t really know what the code inside does, but that’s just details. I see the names to know who they are, where they are. I’m building a roadmap in my brain.
Soon the code takes shape to me. It might have been hours or days, but eventually it is there. I can now feel the code, a sensory perception on the periphery of what is real. I know how the code is defined, what its facets look like, where they are placed. I scroll through the files once more, seeing them again, like photographs of old friends. I look into each and see references to others. Look, there’s that same little guy. He’s over here too. I don’t know why, but now I see how. The code becomes three dimensional, linked together in a graph, woven together like a tapestry; function upon function, hierarchical and ordered.
It’s only now that I begin to understand what it is that the code actually does, a portrait forming in my mind, full and complete. I had some idea going in, but that was a base perception, a rough image painted with broad strokes. Now I see the details, the intricacy, the patterns and the truth. I walk through the code and feel it react. I know where it is going and I know where it’s been. I don’t need a machine to tell me this. It happens all inside my head as I sit in the car driving to and from work, as I shower in the morning, as I lay awake at night.
But that's just me.
Matt
Thursday, May 20, 2004
Old Brains Can Learn New Tricks
A study led by the Rotman Research Institute at Baycrest Centre for Geriatric Care in Toronto has found that older adults can perform just as well as young adults on visual, short-term memory tests. What's remarkable, however, is that older adults use different areas of the brain than younger people.
The study, conducted in conjunction with the University of Toronto and Brandeis University in Massachusetts, is to be published in the Oct. 25 issue of the international journal Current Biology. While other studies have compared young and old brain activity, this is the first to focus on how the interplay of brain regions relates to cognitive functioning and aging.
"The older brain is more resilient than we think," says Dr. Randy McIntosh, Rotman scientist and assistant professor, Department of Psychology, at University of Toronto. "If aging brains can find ways to compensate for cognitive decline, this could have exciting implications for memory rehabilitation."
Ten young adults (ages 20 to 30) and nine older adults (ages 60 to 79) participated in identical visual, short-term memory tests while their brain activity was measured using positron emission tomography (PET). PET measures regional cerebral blood flow and acts as a marker to show which brain areas are lighting up during a memory performance task.
Participants were shown pairs of vertical grid patterns on a computer screen and asked to select which one had the higher spatial frequency. After viewing each pair, they would press one of two keys to indicate the correct grid. Researchers measured their ability to discriminate stimuli over half-second and four-second intervals.
Results show that young and older participants performed the memory task equally well, but the neural systems or pathways supporting performance differed between young and older individuals. While there was some overlap in the brain regions supporting performance (e.g. occipital, temporal and inferior prefrontal cortices), the neural communication among these common regions was much weaker in older individuals.
Older individuals compensated for this weakness by recruiting unique areas of the brain, including the hippocampus and dorsal prefrontal cortices. Scientists are most fascinated by the older brain's activation of the hippocampus because this area is generally used for more complicated memory tasks such as learning lines from a Shakespeare play.
Dr. McIntosh was assisted in the study by Dr. Allison Sekuler, Department of Psychology, University of Toronto, and her father Dr. Robert Sekuler at Brandeis University.
Funding for the study was provided by the Alzheimer's Association of America, the Natural Sciences and Engineering Research Council of Canada, and Medical Research Council of Canada.
CONTACT:
Steven de Sousa
U of T Public Affairs
(416) 978-5949
steven.desousa@utoronto.ca
Wednesday, May 19, 2004
TIGGR & MIKE (AI): Artificial Intelligence - Solved!...Well Maybe. {Holographic Correlational Opponent Theory}
Read on for our continued discussion on Correlational Opponent Theory.
>> Damn you and your damn theory!
*teehee*... this is a topic that my brain has been chewing on for about 10 years. Glad to see it is causing someone ELSE to "lose some sleep".
>>Ever since you told me you believe that each person simply "reacts" to his/her surroundings, I've been watching people, and thinking hard for some evidence it's not true. And so far, I haven't found it - But - I still haven't found any evidence to disprove my preferable theory that we are unique, blah blah....
AHH yes, and that is the beauty of this "little" theory - I have not been able to debunk it so far... however, nor have I really been able to PROVE it "beyond a shadow". That's what the whole Co-OpTheory worksheet is for - trying to formalize the idea and get any wrinkles in the logic ironed out with some factual (albeit mathematical) data.
>>Unique in appearance - there is no question there. Unique in mind, well - yes - I believe that (tho I'm still considering the development of the mind [in relation to experiences] as people grow from being a little toddler) - unique in behaviour - well, I would say yes also, if I didn't see so many people behaving in extremely predictable ways every day. I could argue that one's determination to think for themselves creates individuality and uniqueness....But the rebuttal to that would be that throughout the course of that person's life, they have been exposed to situations, or attitudes that have given them the attitude towards thinking.
Well, this is a much more complex thought process than it initially seems... MY argument is that human behaviour is a systematic response to the current input stimuli that the human receives COMBINED with ALL of the responses to ALL PREVIOUS inputs to the system... In this way, the "learned", "programmed" or "acquired" behavioural mechanisms that humans develop are essentially the "least amount of change required" to make their output (actions/behaviours) EQUALLY OPPOSE the combined inputs (and learned adjustments to those inputs from the past)... Our basic motivation then becomes laziness: what is the least amount of work (output/action) I have to do in order to balance this equation:
Current Inputs + Behaviour = Sum of all Inputs and outputs provided to the system over time
_IF_, perchance, my past actions created an internal "engram" that means NEXT time the same scenario comes around I don't see it as being "out of the ordinary" or interesting (i.e. if the value of my brainwaves PLUS the input scenario is CLOSE to what I remember), then the input can be considered fully learned... How then can we "LEARN" behaviours in such a way that we exhibit "unique" personal behaviours? Easy... when we encounter the input stimuli, something ELSE (other than the INPUT and our OUTPUT) "unbalances" the system... For instance, if a parent ADDS input to a baby's "processor" during and after the presentation of an input, then the INPUT will NOT equal the memory encoded in the baby's brain. The scenario therefore becomes "interesting" to the baby, and the baby's mind encodes the difference between the input and the expected value back into the system, so that next time the memory is closer to the inputs...
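The balance equation above can be rendered as a tiny feedback loop. This is only a sketch of my idea, with illustrative numbers: memory is a single running expectation, behaviour is the smallest output that offsets the current input, and any surprise (input minus expectation) is folded back into memory.

```python
class Reactor:
    """Memory is a running expectation; behaviour is the least output that
    offsets the current input; surprise is folded back into memory."""

    def __init__(self, learning_rate=0.5):
        self.expected = 0.0
        self.lr = learning_rate

    def react(self, stimulus):
        surprise = stimulus - self.expected   # "interesting" if nonzero
        self.expected += self.lr * surprise   # encode the difference
        return -stimulus                      # balance the equation

agent = Reactor()
for s in [1.0, 1.0, 1.0, 1.0]:
    agent.react(s)
print(round(agent.expected, 3))  # the repeated input becomes "fully learned"
```

Once the expectation matches the stimulus, nothing is "interesting" any more and learning stops - which is exactly the "fully learned" condition described above.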
Which leads to your next comment really well:
>>Raising a child will provide some decent evidence one way or another towards this topic - how they learn, what they learn for themselves and why - for example - I have a cousin who is a couple of years old - he can open and close sliding glass doors, and understands that getting his fingers jammed would be bad - yet he has never jammed his fingers..... (I don't know if this is something his parents tried to teach him...)
YES... spot on! {Maybe Kat and I should get onto the baby-making bandwagon and "build" ourselves our very own human-AI experiment *LOL*}... I am actually looking forward to fatherhood partly for this (not exact) reason... I think I understand the complexity of the layering of learned experience enough to be a "useful" father... Also, though, because the system is LOGARITHMICALLY complex (i.e. a previously learned response cannot be undone, it must be "learned out", and the "whole of brain" experience is fully encoded - every input and response EVER EXPERIENCED is encoded in the brain engram all at the same time)... I am aware that it is a pretty onerous task and MY mistakes compound once they are learned by my child...
The sliding door conundrum... not so hard to explain... TWO bits to it:
Going back to "least amount of change required to make their output (actions/behaviours) EQUALLY OPPOSE the combined inputs"... If the inputs to this "feedback loop" include metrics such as the current amount of pain and the current amount of force being exerted, as WELL as visual cues such as the location of the child's hand in relation to other objects, the velocity of the door, etc., then this whole "least amount of resistance" theory covers this learned behaviour by trying to balance the _potential_ for pain (or a bad experience) against the current situation... However, the child must have been made aware of their own mortality for this all to work. At some point the input to the system (pain, blood, being shouted at by mum and dad) increased above the "remembered" value and caused a learning experience... This learning experience then gets "reused" in different scenarios... HOW? Well, you see, this whole process exhibits a sort of INERTIA... like this: the child puts a hand near something (other than a door), and MUM or DAD yells at them to stop (protecting them from hurting themselves). The input situation now does not match what the child expects (an INTERESTING situation occurs), so the child learns to expect shouting when their hand nears something... Next time mum and dad DON'T yell (the situation is not dangerous)... this in turn becomes an INTERESTING situation, because the child was expecting to be shouted at when their hand went near something but wasn't (and, as such, their inputs don't match their memory)... They un-learn the hand/shout scenario a little bit... This naturally encodes the "middle ground" between the two scenarios and, unknowingly, has encoded the same situation with varying inputs as a safe/not-safe engram... As soon as the child goes to touch something and doesn't get shouted at, they LEARN the situation to be safe (because there was no shouting and the inputs _still_ do not exactly match the memory).
In this way, although MUM & DAD did not "teach" the "doors can hurt your hand" meme to the child, the child has learned the process for evaluating safe situations because they were told about an unsafe one (I think that all makes sense)... THIS DOES NOT WORK IF THE CHILD HAS NOT FELT PAIN! If the child has not felt pain, then there is no difference between the inputs of:
* the child's hand touching a HOT item that could hurt them (or sharp-hurty door)
* the child's hand touching a soft cushion that could not
If there is no difference in these two inputs from the child's point of view, then there could be no learning... Thus we can prove that, in order for this theory to work, a feedback loop allowing data about the host/human/child/robot needs to be included as an input to the system... which is OK, because this is exactly what humans have - 5 RAW sensory inputs (smell, touch, taste, sight, sound) plus multiple internal inputs (heart rate, temperature, pain, emotive response, hunger, etc.).
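The shout/no-shout loop above is essentially error-driven learning, and can be sketched with a simple delta rule: the child's expectation of being shouted at moves toward what actually happens, and only mismatches (surprise) cause any learning. The values and the learning rate below are invented for illustration.

```python
def learn(events, rate=0.5):
    expectation = 0.0                   # expected "shout" level when reaching
    history = []
    for shouted in events:              # 1.0 = parent shouts, 0.0 = silence
        surprise = shouted - expectation
        expectation += rate * surprise  # only a mismatch causes learning
        history.append(round(expectation, 3))
    return history

# Dangerous reaches (shouting) followed by safe ones (silence): the
# expectation climbs, then is gradually "learned out" - never snapped to zero.
print(learn([1, 1, 1, 0, 0, 0]))
```

Note how the un-learning is gradual, matching the claim above that a learned response cannot simply be undone but must be "learned out".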
Secondly (and briefly, because this is officially a long-winded email now), various regions of the brain are used for different tasks and trained specifically for those tasks accordingly... The various different areas of the brain are then "joined" together (at the medulla oblongata) and the total output is sent down the spinal cord to complete the feedback loop (action is taken, and the response from the action is the next input value - e.g. turn your head, and the visual [and audio] input changes, potentially producing an output). Over time (and continuously over our life), sections of the brain become increasingly and slowly disconnected from each other... At a time near conception, our brains are almost "fully connectionist"; that is, each neuron is (almost) connected to every other neuron in the brain, therefore the brain is completely aware of the other bits of the brain... As time goes by (rapidly when young, slowly as we get older), connections between neurons are "dropped". This results in sections of the brain becoming disconnected from other sections, at least partially.
The effect of this is that sections of the brain that are "trained" to respond to certain inputs no longer have the "complete" view of the world that they used to, and are required to provide an output based upon the "partial picture" of the world that they have... When many sections make "guesses" like this, it results in novel responses that are still applicable to the scenario they are in response to, but are based upon incomplete or inaccurate data and are therefore "NOT QUITE RIGHT", which we see as "creative" or "novel" thought... This is kinda like the "Tower of Babel" story of how "God" introduced the language barrier overnight (different languages did not exist in the bible before this time) in order to stop humans building a tower all the way to heaven... The brain, after some initial training in the "one big same language", suddenly starts speaking different dialects in different areas and can't fully understand the dialect of other areas... This results in a kind of neural "Chinese whispers", which we see as creativity or uniqueness - hrmm, maybe this is a good name for creative thought (neural Chinese whispers)...
(Also, links between neurons that have no direct link to the outside world are created over time in the brain... that is, one part of the brain may take its input from the output of another part of the brain... so these decision-making links are hierarchical, not flat!)
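The "partial picture" idea can be sketched numerically. In this toy version (all numbers invented), several brain "regions" each see only a random subset of the full input and estimate an answer from it; because no region sees everything, the combined answer drifts from the "correct" one in ways that look novel rather than simply wrong.

```python
import random

random.seed(1)
FULL_INPUT = [4, 7, 1, 9, 3, 6]          # the "complete view of the world"

def region_response(visible):
    # A region estimates the total from only the inputs it can see,
    # scaling its partial sum up to cover the inputs it cannot.
    partial = sum(FULL_INPUT[i] for i in visible)
    return partial * len(FULL_INPUT) / len(visible)

def disconnected_brain(n_regions=5, view=3):
    guesses = [region_response(random.sample(range(len(FULL_INPUT)), view))
               for _ in range(n_regions)]
    return sum(guesses) / len(guesses)    # the regions' "joined" output

print(sum(FULL_INPUT), round(disconnected_brain(), 1))
```

Each region's answer is reasonable given what it can see, yet the ensemble rarely reproduces the exact total - a crude stand-in for "applicable but not quite right" responses.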
Soooo, from 3 simple theories (Co-Op theory + neural learning + disconnectionist evolution) we get intelligence and creative thought...
The next question is, how does this apply to evolution?
Could, over time, our physiology change so that certain areas of the brain are pre-connected (or pre-disconnected) from each other to produce a more efficient brain? Is this what happened between Homo sapiens neanderthalensis (Neanderthal man) and Homo sapiens sapiens (modern humans)??
Could, over time, the "driver signal" used to start this "wave theory thing I have going" (in this case the 28Hz driver signal on the Excel worksheet) be customized to pre-contain data rather than being a blank driver signal? Could, perhaps, we be copied with the waveform (or part of it) that our parent/mother has and, therefore, could we be born with a priori knowledge of the world? The answer would seem at first look to be YES - after all, we are exactly the same biological species as our ancestors, but we are "smarter", or seem to be, because we "know more about more"... We "remember" what our ancestors learned to some degree... Now, sure, some of this is because we have ancestors alive to change our inputs and therefore condition our responses, as well as other artifacts such as the technology available today as opposed to before, which changes our inputs compared to those of our ancestors and thus changes our behaviours. BUT, how can WE (our generation) learn everything our previous generations knew, use that knowledge to "create" new knowledge, and then continue the cycle? Surely this is only possible if, every generation, we pre-prime the driver waveform of the brain with some knowledge (that is, it is not a simple sine wave but a complex wavelet similar to that left over after a waveform has already "learned" something)???
Ahhh... well, I would say you have just touched the tip of a 10-years-worth-of-thought iceberg... maybe I should finish this email here and continue my rant at another time...
;)
Hey, by the way, what time are you knocking off today? Chances of a lift?
Lastly, I am enjoying discussing this with someone (for once) so much that I have created an online blog for it... You will get an invite email soon; we can continue this thread on there (and thus refer to our notes or any links we get) from home.
The Blog will be at:
My IT Blog is at
Kat and I have a personal one at
http://tiggrlyfe.blogspot.com tho it is blank at the mo
Cheers.
T.
Welcome to #{BitKrafted}!
Well, it has finally happened - I have finally gotten around to creating a WeBLOG to deal with all of those "Cool" techie ideas I have, or have had over the years but didn't have a place (or people) to discuss them with.
From the sublime to the ridiculous, this is where I will put it.
To newcomers who know me:
You got to this site because you were chatting to TIGGR about one of the topics on this page... so keep chatting.
To "Strangers":
TIGGR is an eclectic 26-year-old Software Engineer from Canberra, Australia, with a keen eye for technology on the "bleeding/clotting" edge...
An interest in AI, Cognitive Sciences, Quantum Physics - "Life, the Universe and Everything" - has been the Petri dish for scores of ideas over the last 15 years in the IT industry; some of them need discussion; some of them need completing; some of them need debunking...
DISCUSS, COMPLETE, DEBUNK....BITCRAFT!
All viewers:
The ideas, thoughts, research, data and other content located on this site are STRICTLY the SOLE PROPERTY of TIGGR {Wayne Lee-Archer}. New product lines, patents or concepts derived from the works or research on this site retain the intellectual property rights of the author {Wayne Lee-Archer}. Basically, I put the ideas here on this site so that we can discuss them and improve upon them... DO NOT take them as your own... DO give credit where it is due, and DO provide trackback links to this site if you reference it on your own, and we will all get on just fine!