A model of parallel computing in which processes that have been spawned dynamically turn into data upon completion, and data may be stored in tuples in one or more shared Tuple Spaces. A process may add tuples to a Tuple Space, and tuples may be copied or removed by using a pattern tuple to perform an associative lookup. Generative Communication is usually talked about as an alternative to Message Passing.
Is this like Blackboard Metaphor? Yes, very similar.
"Processes that have been spawned dynamically": I interpret this to mean: "Processes that have been started by some other process."
I would think that includes every single process running on your computer, save the "init" process, which starts all other processes.
I'm not sure what it means for a process to "turn into data upon completion." Do you mean: An image of the process's instruction code is turned into data (like a Core Dump), and then that floats in the Tuple Space? Or do you mean: The process's return value is stored in the Tuple Space? Or: something else?
Then it seems to me that you mean:
While a process is running, it can participate in the Tuple Space. For example: It can add to the tuple space, it can query the tuple space, it can remove from the tuple space, and it can change the tuple space. (This part I believe I understand, and it makes sense to me.)
I don't understand why it's called "Generative Communication." I wish there was some explanation of that.
I have found a "Generative Programming" wiki: www.program-transformation.org
Generative Communication and Generative Programming are two completely distinct things, as different as a "memory heap" and the heap data structure.
The easiest way to understand generative communication is in contrast to the more common "message passing" system.
In message passing, object A wants to communicate a piece of information to object B. So it packages that information in an agreed-upon form, and then tells object B, "please receive this message". This implies that A knows of the existence of B and its willingness to receive the message, and assumes that the lifetimes of A and B overlap. If object A doesn't specifically know about object B, some systems allow it to broadcast the piece of information to all other objects, but this still assumes that the lifetimes of the sender and recipient overlap.
In generative communication, object A emits the piece of information into a shared space (which is essentially message passing to the shared space). Other objects can then observe this information, which is not removed from the shared space unless another object does so explicitly. The source and destination objects' lifetimes don't need to overlap: Object A can cease to exist before an object that cares about the information is created. It's called "generative" because the existence of a quantum of information isn't just a side-effect of a message-passing mechanism; instead, the information created by object A is an entity in its own right, with a well-defined lifetime.
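To make the contrast concrete, here is a minimal sketch, assuming a toy in-process TupleSpace class (the class, its matching rule, and the producer/consumer names are illustrative assumptions; "out" and "rd" merely echo Linda's operation names). The point is only that the producer can finish before anything that cares about its output exists:

  import threading

  class TupleSpace:
      """A shared bag of tuples with associative lookup by pattern."""
      def __init__(self):
          self._tuples = []
          self._lock = threading.Lock()

      def out(self, tup):
          """Add a tuple to the space."""
          with self._lock:
              self._tuples.append(tup)

      def rd(self, pattern):
          """Non-destructively return one tuple matching the pattern; None is a wildcard field."""
          with self._lock:
              for tup in self._tuples:
                  if self._matches(pattern, tup):
                      return tup
          return None

      @staticmethod
      def _matches(pattern, tup):
          return len(pattern) == len(tup) and all(
              p is None or p == field for p, field in zip(pattern, tup))

  space = TupleSpace()

  # "Object A" deposits a piece of information and then ceases to exist.
  def producer():
      space.out(("result", "job-42", 1764))

  producer()   # nothing is waiting to receive this

  # An object created later can still observe the information.
  print(space.rd(("result", "job-42", None)))   # -> ('result', 'job-42', 1764)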
Generative communication is a mechanism by which the blackboard metaphor is implemented, but it's not limited to that use. Also, while generative communication was pioneered by the Linda project and its Tuple Space research, it's not necessarily tied to a Tuple Space implementation; any searchable data store will do.
Allow me to fill in based on my research, and let me know if I understand right:
The idea seems to be that you keep a big gigantic graph that we call a "Tuple Space."
In the most general sense, it's not a graph, just a shared bag of tuples. Any semantic 'edges' are logically imposed by the interpreting programs, not by the space itself. Perhaps you're thinking of an Object Space?
This "Tuple Space" represents pretty much everything that your program (or family of programs) knows.
Not at all. Most of what your family of programs knows is embedded (a) in the program logic that ultimately creates new tuples or has side-effects, and (b) in the pattern-matching algorithms for identifying tuples you can operate upon. Neither of these are represented in the Tuple Space. If they were, you'd have something more than just a Tuple Space.
So, let's say you get a web request.
Suddenly, in your Tuple Space, the following might appear:
(Node "Web Request") -- (Edge "With URL") -- > "http://c2.com/cgi/wiki?GenerativeCommunication"
(Node "Web Request") -- (Cookie) -- > "some cookie string"
(Node "Web Request") -- (Date) -- > (some date node)
(etc., etc., etc.,.)
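(Reusing the toy TupleSpace sketch from earlier, and assuming just for illustration a flat field layout of ("web-request", request-id, attribute, value), the same request might land in a plain bag-of-tuples space like this:)

  space.out(("web-request", "req-1", "url", "http://c2.com/cgi/wiki?GenerativeCommunication"))
  space.out(("web-request", "req-1", "cookie", "some cookie string"))
  space.out(("web-request", "req-1", "date", "some date"))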
Now, I believe the idea of Generative Communication is that this "communication" (this injection of tuples into the Tuple Space) causes a sort of chain reaction. Because, somewhere, there's a trigger. There's a sort of trap that has been laid. The trap says: "Whenever there's a match for (Web Request)--(With URL)-->(foo), start up this program."
Traditionally, you have a bunch of programs periodically polling the Tuple Space with pattern-match algorithms. However, it wouldn't be infeasible to perform pattern-match based triggering of messages to initiate a procedure. It's considerably more complicated, though, so you might need to use the Tuple Space to register your interest with a dedicated service that does the pattern-matches and callbacks.
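A sketch of that traditional polling style, again using the toy TupleSpace from earlier (the handler, the field layout, and the polling interval are all assumptions for illustration, not part of any real Tuple Space API):

  import time

  def handle_request(request_id):
      # Stand-in for "start up this program": build a response in the space.
      space.out(("web-response", request_id, "body", "<html>...</html>"))
      space.out(("web-response", request_id, "complete", True))

  def poll_for_requests(cycles=3):
      seen = set()
      for _ in range(cycles):
          match = space.rd(("web-request", None, "url", None))
          if match is not None and match[1] not in seen:
              seen.add(match[1])
              handle_request(match[1])
          time.sleep(0.1)   # periodic polling; a dedicated service could do callbacks instead

  poll_for_requests()
  print(space.rd(("web-response", "req-1", "complete", None)))   # -> (..., True)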
The program then makes a Response node in the Tuple Space, and attaches a bunch of data to it.
Perhaps the last thing the program does is say: "(Web Response)--(is complete)-->(true)," which then causes another program to start: the Web Server.
The server is raised, takes the web request and web response pair, sends off the response to the Inter Net, and then removes both the request and response data from the Tuple Space.
The act of raising programs may be the "generative" part here. The communication part may be, well, that the entire Tuple Space is a gigantic communication of state. If you ever need some piece of information, you just look it up in the Tuple Space.
I don't believe so. The 'generative' refers to the putting of tuples into the Tuple Space; i.e. the tuples are a 'product' that was 'generated' by the process. The 'communication' is in that other actors are making queries and reading the responses. This doesn't necessarily involve raising programs; programs are allowed to come along at their own time and do the difficult work of making queries and filtering through the set of response tuples (which might or might not be ambiguous, depending on the structure of the tuples and on the complexity of the query language and pattern-matching provided by the service that provides the tuple-space).
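For instance, a program that arrives long after the producers have finished might query and filter like this (rd_all is an assumed convenience built on the toy TupleSpace sketch above; Linda itself only offers single-tuple reads):

  def rd_all(space, pattern):
      """Return every tuple matching the pattern (an assumed extension, not a Linda primitive)."""
      with space._lock:
          return [t for t in space._tuples if TupleSpace._matches(pattern, t)]

  # A latecomer filters the response tuples down to the completed requests.
  responses = rd_all(space, ("web-response", None, None, None))
  completed = {t[1] for t in responses if t[2] == "complete" and t[3] is True}
  print(completed)   # -> {'req-1'}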
Am I understanding correctly? Is this the right idea?
-- Lion Kimbro
Embedding pattern-recognition for individual tuples and (especially) for groups of tuples and temporal tuple associations (e.g. 3 tuples with properties x, y, z added in any order within 3 minutes of one another), along with automated callbacks (or process-spawning) when the conditions are met to execute useful work, would be a VERY considerable enhancement to the basic Tuple Space and Generative Communication. So would transaction support or ACID properties. A Tuple Space is Shared Memory in all the relevant senses, and doesn't save you from concurrency issues; you still need to use transactions (at least minimal ones) if you're going to have considerable concurrent read/add/delete operations upon the space, at least if you wish to avoid such things as two different programs reading one tuple, both deciding they want to remove it and do work on it, and then both removing it.
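A small sketch of that race and of the atomic, destructive removal (in the spirit of Linda's 'in') that avoids it; the 'take' helper is an assumed extension of the toy TupleSpace used above:

  def take(space, pattern):
      """Atomically find, remove, and return one matching tuple, or None."""
      with space._lock:
          for i, tup in enumerate(space._tuples):
              if TupleSpace._matches(pattern, tup):
                  del space._tuples[i]
                  return tup
      return None

  space.out(("job", "job-7", "pending"))

  # Two workers contend for the same tuple: because the check and the removal
  # happen under one lock, exactly one of them can claim it.
  first = take(space, ("job", None, "pending"))
  second = take(space, ("job", None, "pending"))
  print(first, second)   # -> ('job', 'job-7', 'pending') None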
See original on c2.com