[Coco] Emulator

Wayne Campbell asa.rand at yahoo.com
Wed Sep 9 15:58:53 EDT 2009


I used a RAM disk all the time when I had my 512K CoCo3. I set it to 192K, which was enough to hold all of the work files. In the documentation for DCom I recommended that a RAM disk be used, if possible, to speed up file access.

Unlike your genealogical database, which might grow over time but isn't necessarily deleted between runs of the program, DCom recreates its work files each time it runs. Because it is decoding I-Code, it has no way of knowing whether it is being given the same module it decoded last. Each module is different, so the data files had to be recreated for every module.

I used a form of indexing in the files. Each file began with an integer holding the number of records in that file; that was how it knew where the end of the file was when adding records. Each record was a record variable, with a user-defined TYPE defining the fields. Finding a specific record was as easy as:

SEEK #filePath,(currentRec-1)*SIZE(recVar)+2

The reason for seeking to currentRec-1 is that the first record starts at position 2 (0+SIZE(numRecs), numRecs being the two-byte integer record count). If I used currentRec instead, and currentRec was 1 (the first record), the pointer would land at 1*SIZE(recVar)+2. With a 40-byte record, that's 40+2 = 42: not the right position for record #1, but exactly the right position for record #2.
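
To put the whole pattern in one place, here's a minimal Basic09 sketch; the file name, TYPE, and field names are made up for illustration, not DCom's actual ones:

PROCEDURE getrec
(* Read record number currentRec from a work file whose first *)
(* two bytes hold the record count.  Names here are made up. *)
TYPE recType=refName:STRING[20]; refCount:INTEGER
PARAM currentRec:INTEGER
PARAM recVar:recType
DIM filePath:BYTE
DIM numRecs:INTEGER
OPEN #filePath,"workfile":UPDATE
(* position 0: the integer record count at the head of the file *)
GET #filePath,numRecs
IF currentRec>0 AND currentRec<=numRecs THEN
  SEEK #filePath,(currentRec-1)*SIZE(recVar)+SIZE(numRecs)
  GET #filePath,recVar
ENDIF
CLOSE #filePath
END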

Unpack still uses this form for its record files, but with major differences. First, each work file is created and identified by the module it was derived from, so I only have to delete a file if the same module is being decoded again. Second, the files are all laid out differently from DCom's style, and there are fewer of them.
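
A rough sketch of that check; Unpack's actual file-naming scheme isn't shown here, so assume a work file named after the module:

PROCEDURE prepwork
(* Delete the old work file only when the same module is being *)
(* decoded again.  The "<module>.var" naming is an assumption. *)
PARAM modName:STRING[29]
DIM path:BYTE
ON ERROR GOTO 10
OPEN #path,modName+".var":READ
(* the OPEN succeeded, so this module was decoded before *)
CLOSE #path
DELETE modName+".var"
10 ON ERROR
CREATE #path,modName+".var":UPDATE
CLOSE #path
END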

You can tell the difference when a procedure has many variables. One program I am testing with has over 150 program variables defined. Including the Mirrors (my term for the copy Basic09 creates when the variable being assigned a value also appears in the calculation of that value), this procedure has over 500 references in the instruction code, more than 200 of them unique. As the variables file grows, it obviously takes longer for Unpack to a) determine whether a reference has already been identified; b) if it hasn't, once it has been identified, search the variables file again to see whether a mirror (or an original variable, if the reference is itself a mirror) exists; and c) add the new record.
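
In sketch form, that lookup is a linear scan followed by an append; the record layout and names below are guesses for illustration, not Unpack's actual ones:

PROCEDURE findvar
(* Scan the variables file for a reference; append it if missing. *)
TYPE varRec=refNum:INTEGER; isMirror:BOOLEAN; original:INTEGER
PARAM ref:INTEGER
PARAM found:BOOLEAN
DIM vp:BYTE
DIM numRecs,i:INTEGER
DIM rec:varRec
OPEN #vp,"vars":UPDATE
GET #vp,numRecs
found:=FALSE
i:=1
WHILE i<=numRecs AND NOT(found) DO
  GET #vp,rec
  IF rec.refNum=ref THEN
    found:=TRUE
  ENDIF
  i:=i+1
ENDWHILE
IF NOT(found) THEN
  (* the second pass for mirrors/originals would go here *)
  rec.refNum:=ref
  rec.isMirror:=FALSE
  rec.original:=0
  SEEK #vp,numRecs*SIZE(rec)+SIZE(numRecs)
  PUT #vp,rec
  numRecs:=numRecs+1
  SEEK #vp,0
  PUT #vp,numRecs
ENDIF
CLOSE #vp
END

Every GET in that loop is a disk hit unless the file sits on a RAM disk, which is why the file's growth hurts so much.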

DCom itself has over 800 variable references in its instruction code. Even with a RAM disk, DCom took about a half-hour to decode itself. Unpack is faster, but by the time I'm through with it, the disk operations needed to sort the files that need sorting, modify the fields that need modifying, tie everything together, and produce the correct output could make it as slow overall as DCom. It would be much faster, I believe, to load the file data into a buffer and manipulate it there.
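
Something like this is what I have in mind; a rough sketch, assuming the record count fits in an in-memory array (the 500-record cap and the names are made up):

PROCEDURE loadvars
(* Read the whole variables file into an array so the searching *)
(* and sorting can happen in RAM instead of on disk. *)
TYPE varRec=refNum:INTEGER; isMirror:BOOLEAN; original:INTEGER
PARAM buf(500):varRec
PARAM numRecs:INTEGER
DIM vp:BYTE
DIM i:INTEGER
OPEN #vp,"vars":READ
GET #vp,numRecs
FOR i=1 TO numRecs
  GET #vp,buf(i)
NEXT i
CLOSE #vp
END

Writing the sorted records back out in a single pass at the end would replace all the scattered SEEKs with sequential I/O.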

I never knew there was a way to check which CoCo you were running on. I mean, yeah, 64K or more memory = CoCo 2 or 3, but even a CoCo 1 could be upgraded to 64K. Perhaps the ROMs have the info needed to determine which CoCo is being used?

Wayne