Doing it better
1999-04-16

> > instead it joins the queue at its offerer's inbox.
>
> Surely offeree's?
>
> > An Agent can wait for a buffer to arrive in its inbox.
>
> Ah, no. In that case the language is confusing. I'd expect a buffer to
> end up in the inbox of the acceptor, given the language.

Ah, I see. I think you answered your own question here, right? I chose the
language because it is offering a service. An agent offers to receive
orders. The buffer does not embody the service, only the data, which flows
the other way. But actually, if the service returns a result, then the
words are the wrong way around for the return transaction, so you have a
good point. A better terminology would be based on the data rather than
the politics. I think 'send' is the obvious replacement for 'accept', but
what should I use for 'offer'?

> > Waddya think of it all? Source code for interpreter (basically one big
> > switch statement, 112 very similar cases) is available on request.
>
> Interesting. I don't think the source would tell me much interesting.
> I'd like to see a spec. I'd be interested, given the three styles I have
> so far for Mite, to see what you come up with.

Well, that's really why I offered you the source code: it forbids (nearly)
everything which is not in the spec. The only exception is that it ought
to undefine the flag bit at a branch and at a label. To tighten this up, I
would have to introduce a new flag bit which indicated whether the
original flag bit was valid, and I would have to introduce an instruction
to represent a label, neither of which I'm prepared to do. Alternatively,
I could have a verifier, but it would not be readable. So maybe I should
do a formal spec as you suggest.

Other stuff:

The interpreter simulates pre-emption. This is modelled in Java by giving
Agents a run(int timeslice) method, which does the business (number
crunching, sending and receiving messages, etc.) and returns either when
the timeslice runs out or when the business is finished. The timeslice is
measured in virtual clock cycles, of which there are about 10,000 per
millisecond (to the nearest order of magnitude!). This decision is partly
to make the interpreter easier to write, and partly because it's nice if
the timeslice represents an amount of computation which is independent of
the physical machine.

The interpreter overrides the run() method. It records the value of pc
when it is called. Every time it performs a branch, it subtracts the old
value from the current value to determine how many clock ticks it has
used. This is a relatively insignificant overhead to support, and it
gives pre-emption accurate to about 20 virtual cycles, typically.

This is a great idea until you want to write the standard libraries. These
typically involve writing Agents which override the run() method with
calls to the Java libraries. Unfortunately, the Java libraries don't
return until they are finished, so the timeslice is meaningless. To avoid
bad scheduling behaviour, I time all such calls using Java's millisecond
timer, and convert to clock ticks using the above estimate of 10,000
cycles per millisecond. Each Agent keeps a debt of virtual cycles, and the
run() method returns immediately as long as it is in debt. This gives the
correct amortized behaviour, which ought to be good enough.

I'm trying to write a screen device at the moment. I think it's just going
to be a bitmap in a window, which responds to various low-level drawing
commands. I'm having difficulty accounting for all the processor time,
because Java is too clever by half, and has separate redraw threads and
event-handling threads in addition to my thread of computation. It also
appears to draw the bitmap in the wrong place, and allows the window to be
grown arbitrarily big, neither of which is helpful. I'd better ask
Matthew.
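The debt mechanism above could be sketched roughly like this. All the
names here (CYCLES_PER_MS, doBusiness, chargeForeignCall) are my own
illustration, not the interpreter's actual code, and it only models the
debt accounting, not the pc-based branch counting:

```java
// Sketch of the cycle-debt accounting for untimeable library calls.
// Names and structure are illustrative assumptions, not the real API.
abstract class Agent {
    // Rough estimate from the text: ~10,000 virtual cycles per millisecond.
    static final long CYCLES_PER_MS = 10_000;

    private long debt = 0;  // virtual cycles owed for past library calls

    // Called by the scheduler with a timeslice in virtual cycles.
    final void run(long timeslice) {
        if (debt > 0) {
            // Still in debt: spend this timeslice paying it off and
            // return immediately, so the amortized rate comes out right.
            debt = Math.max(0, debt - timeslice);
            return;
        }
        doBusiness(timeslice);
    }

    // The Agent's real work (number crunching, messages, etc.).
    abstract void doBusiness(long timeslice);

    // Wrap a Java library call that won't return until it is finished:
    // time it with the millisecond clock and convert the elapsed time
    // into a debt of virtual cycles.
    protected final void chargeForeignCall(Runnable call) {
        long start = System.currentTimeMillis();
        call.run();
        long elapsedMs = System.currentTimeMillis() - start;
        debt += elapsedMs * CYCLES_PER_MS;
    }

    final long debt() { return debt; }
}
```

A library-wrapping Agent would then implement doBusiness() as a
chargeForeignCall(...) around the blocking Java call; subsequent calls to
run() return at once until the accumulated debt has been repaid.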
Of course, I could just switch to ARM code, but I'd really like a portable
version first. Anyway, it looks like I will be able to start experimenting
with user interfaces, high-level languages and schedulers soon.

Alistair