
Wednesday, March 16, 2011

Is there something in-between?

Lately we have been goofing around a LOT with Cellular Automata and Agent-Based simulations. Of course we're working on the Genetic Algorithms case study, but some students have asked about other applications of these really interesting parts of computational science.

So today I was reading Cities and Complexity and came to the part where they shift from strict rule-based, deterministic cellular automata simulations to simulations that include random elements. I immediately remembered something I had thought of a while back while working on some of Wolfram's one-dimensional CA studies (and then, for some reason, completely forgot about it again until this morning). Here's the thought:

When we think of a typical CA simulation, we use a single set of rules, like, say, Rule 110 from Wolfram's one-dimensional CA studies. That rule is a strict mapping of each situation to exactly one response, so there is NO randomness at all.

Naturally, one gets curious about what adding random elements would do to the patterns.  But then I thought of something else, and it really got me to thinking about determinism and randomness.  Let me just show you what I was thinking:

Wolfram's Rule 30 gets its name from reading this binary mapping as a base-10 number:

0 0 0 1 1 1 1 0

Now of course that tells each cell what to do (whether to be "on" or "off" in the next phase) for each of the eight possible situations it could be in. 
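
Just to make that mapping concrete, here's a rough sketch of the lookup in Python (the names and the wrap-around edges are my own choices, not anything from Wolfram -- I'd eventually do this kind of thing in Processing, but Python is handy for thinking out loud):

# Rule 30, written exactly as above: the responses for neighborhoods
# 111, 110, 101, 100, 011, 010, 001, 000, read left to right.
RULE_30 = [0, 0, 0, 1, 1, 1, 1, 0]

def step(row, rule):
    """Apply the rule to every cell at once (wrapping around at the edges)."""
    n = len(row)
    new_row = []
    for i in range(n):
        left, center, right = row[i - 1], row[i], row[(i + 1) % n]
        neighborhood = 4 * left + 2 * center + right   # a value from 0 to 7
        new_row.append(rule[7 - neighborhood])         # the list above runs 111 down to 000
    return new_row

# Start with a single "on" cell in the middle and watch the pattern grow.
row = [0] * 31
row[15] = 1
for _ in range(10):
    print("".join("#" if cell else "." for cell in row))
    row = step(row, RULE_30)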

So, if we wanted to, we could randomize it by replacing one of those digits with a coin flip (f):

0 0 f 1 1 1 1 0

Here, in the third digit from the left, we have switched from "always turn off" to "turn off with a 50% chance." By that we mean something like, "flip a coin, dude, and heads you are on, tails you are off."
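
Sketched the same way as before, here's one rough way you could wire that coin flip into the lookup (again, just my own take, nothing official):

import random

# Same rule as before, but the third entry from the left is now "f": every time
# the 101 neighborhood comes up, we flip a coin instead of reading a fixed answer.
RULE_30_FLIP = [0, 0, "f", 1, 1, 1, 1, 0]

def lookup(rule, neighborhood):
    response = rule[7 - neighborhood]
    if response == "f":                   # the randomized slot
        return random.choice([0, 1])      # heads you're on, tails you're off
    return response

In the step function above you'd just swap rule[7 - neighborhood] for lookup(rule, neighborhood), and everything else stays the same.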

But then I remembered something I had toyed around with when thinking up some AI patterns and state machine stuff for a simple game I was working on. What if something like "choose 1 80% of the time, choose 0 the other 20%" were handled differently? What if we instead put an array of numbers into that particular position, holding a whole sequence of results:

0 0 [ 1 1 1 1 0 ] 1 1 1 1 0

Now, instead of an 80% CHANCE of choosing 1, we simply tell the agent that she should respond with the current number in the array, and move on to the next one for the next time that situation comes up. Meaning, using the example above, the first time that slot in the rule comes up, she chooses 1. She does that the 2nd, 3rd, and 4th times, too. The fifth time, however, she chooses 0, and after that she simply loops around and starts over at the beginning of the array. So you can see that she will choose 1 exactly 80% of the time, but there's no chance involved. It's completely deterministic, and yet it takes the whole question of one-dimensional CA to a new level of, well, complexity. Or does it? I really haven't had much time to think about this in detail, but it seems strangely different from randomness.
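
Here's roughly what I mean, sketched in the same style as before. For simplicity I keep one shared counter inside the rule itself; you could just as easily give each cell her own:

class Cycler:
    """Hands out the values in a list, one per lookup, wrapping around at the end."""
    def __init__(self, values):
        self.values = values
        self.position = 0

    def next(self):
        value = self.values[self.position]
        self.position = (self.position + 1) % len(self.values)
        return value

# The third slot now holds the whole sequence of answers instead of a single digit.
RULE_CYCLED = [0, 0, Cycler([1, 1, 1, 1, 0]), 1, 1, 1, 1, 0]

def lookup(rule, neighborhood):
    response = rule[7 - neighborhood]
    if isinstance(response, Cycler):      # the deterministic "80% of the time" slot
        return response.next()
    return response

Exactly four 1s and one 0 out of every five lookups, and not a coin flip in sight.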

I guess what it really makes me think about is how attractive the finite set of 256 rules that Wolfram studied seemed to me at first.  How ... cozy it seemed that the structure was so, well, structured.  Now I'm thinking that there's an infinity of rules--all strictly deterministic--that could be looked at for good ol' one-dimensional CA. 

Time to hit the books....

Saturday, February 27, 2010

Thrift Store Find: The World of Objects



The internet really is amazing, when you think about it.

A few weeks ago I found an old VHS copy (never opened before) of THE WORLD OF OBJECTS with Philippe Kahn.  With VHS tapes going for 50 cents, I didn't even hesitate to throw this one in my cart.  Then I found it hosted on Google Video, which was convenient.

Yesterday I watched it.  What was amazing to me about it was how little things have changed when it comes to Object Oriented Programming.  The video hammers out the three main facets (Encapsulation, Inheritance, and Polymorphism) as if they are revolutionary concepts in programming.  Well, at one time they were.

My students, though, have never lived in a pre-OOP world. For them, the usual responses range from a nonchalant "oh, okay" to a somewhat amused "well, duh!" I remember Don Slater giving a demo of Alice 3 at Carnegie Mellon University. He made the comment, "you know, as far as objects go, students 'just get it'." He was right. In fact, I've found it easier to teach objects than it is to teach functions, which is a strange twist on reality if you grew up programming BASIC on a Vic-20.

This last week during our programming club at Skyline, one of my best programming students and I talked about redesigning old video game classics like Breakout using OOP principles.  We each had a piece of paper and were soon diagramming UML boxes for a Breakout clone.  It was pretty clear to me, right then and there, that a) you can't go home again, and b) you wouldn't want to anyway.

This weekend "just for fun" I'm programming the Breakout clone and also a cellular automata simulation of forest fires that I came across in a great book on computational programming. For the Breakout clone I used some of the design that my student and I had come up with, but toned down the OOP a little (the paddle and ball classes should be quite reusable). But for the forest fire sim, I sat down and just started coding. That turned out to be a mistake. I hadn't fully planned the program on paper, and I know that the grid class I write will be used by students in the future. So now I'm back to paper and UML diagrams, thinking through what a grid object's attributes and methods should be. At least when I do start writing the code, I'll have a good plan in place.
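
For what it's worth, here's the rough shape of what's on my paper so far, sketched in Python just to think it out loud. The attribute and method names are first guesses, and the spread rule is the simple "a tree next to a burning tree catches fire" version as I remember it from the book:

import random

EMPTY, TREE, FIRE = 0, 1, 2

class Grid:
    def __init__(self, rows, cols, tree_density=0.6):
        self.rows, self.cols = rows, cols
        # Scatter trees over the grid with the given density.
        self.cells = [[TREE if random.random() < tree_density else EMPTY
                       for _ in range(cols)]
                      for _ in range(rows)]

    def neighbors(self, r, c):
        """Yield the states of the four orthogonal neighbors, skipping the edges."""
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < self.rows and 0 <= cc < self.cols:
                yield self.cells[rr][cc]

    def step(self):
        """Advance the whole grid one generation."""
        new = [row[:] for row in self.cells]
        for r in range(self.rows):
            for c in range(self.cols):
                if self.cells[r][c] == FIRE:
                    new[r][c] = EMPTY        # a burning tree burns out
                elif self.cells[r][c] == TREE and FIRE in self.neighbors(r, c):
                    new[r][c] = FIRE         # fire spreads to neighboring trees
        self.cells = new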

I'll be writing both in Processing, and the results will eventually appear on Skyline's Processing Playground page.