[Coco] [Color Computer] Re: the final stage of optimization

Wayne Campbell asa.rand at yahoo.com
Mon Nov 23 22:23:44 EST 2009


James (it's nice to know your name),

The program is a decompiler, unpacker, decoder, however you want to say it. Basic09 compiles into intermediate code, and I am using that to reconstruct the source code.

I am still working on understanding exactly what a hash is, but I think I get it. My idea was to create an array for the literals that includes a field containing a numeric value for each string: the total of the numeric values of each character in the string. I'm not sure if that's what you mean by "packed ASCII" or not, but it sounds like it might be the same thing. Would it be feasible, though? What are the chances that two different strings, even of differing lengths, could have the same value? I haven't started testing that idea yet, but I'm planning on it. If I could do that, then the array for the literals would only need two fields per element, one byte and one integer. Of course, dealing with real values would still be a trick, since a real uses 5 bytes, and I'm not sure how to reduce that to 2.
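(To make the collision question concrete, here is a quick sketch, in Python rather than Basic09, purely as an illustration; the example strings are invented, but they show how easily a plain character-sum repeats.)

  # Illustrative only: summing character values collides very easily,
  # because neither the order of the letters nor their spelling matters.
  def char_sum(s):
      return sum(ord(c) for c in s)   # total of the numeric values of each character

  print(char_sum("LISTEN"), char_sum("SILENT"))   # anagrams: same total
  print(char_sum("AD"), char_sum("BC"))           # 65+68 == 66+67 == 133
  print(char_sum("a"), char_sum("!@"))            # 97 == 33+64: different lengths collide too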

Wayne




________________________________
From: jdiffendaffer <jdiffendaffer at yahoo.com>
To: ColorComputer at yahoogroups.com
Sent: Mon, November 23, 2009 5:01:19 PM
Subject: [Coco] [Color Computer] Re:  the final stage of optimization

Wayne, 

Just so you know, I'm not sure exactly what your application is supposed to do (compiler? assembler?), but it sounds like you are on the right track.
Without knowing more about the application I'm guessing a little.

Anywho...

If you load everything into memory it will definitely be faster, but then you must have more memory than the maximum data size.

However, if you can create an in-memory index with a subset or hash of the target data, you should be able to speed things up dramatically. (This sounds like where you were going.)

This is why you may see two files for one table in a database.  One is the index and one is the data. 

There are many different ways you can implement a hash, but a simple way might be a 16- or 32-bit number made up of packed ASCII characters. Then you can compare the indexed fields in memory very quickly and only need to load records when you get a match against the hash.
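For example, a rough sketch of that idea in Python (the field layout and the read_record_at routine are invented here, just to show the shape of it):

  import struct

  # Pack the first 4 ASCII characters of a literal into one 32-bit key,
  # keep (key, disk_offset) pairs in memory, and only read the full
  # record from disk when the in-memory key matches.
  def packed_key(s):
      b = s.encode("ascii", "replace")[:4].ljust(4, b"\0")  # first 4 chars, padded
      return struct.unpack(">I", b)[0]                      # one 32-bit number

  index = []   # in-memory index: (key, offset_of_record_on_disk)

  def find(literal, read_record_at):        # read_record_at is a made-up disk routine
      key = packed_key(literal)
      for k, offset in index:               # fast: integer compares in memory
          if k == key:
              rec = read_record_at(offset)  # slow: disk read, only on a match
              if rec == literal:            # confirm, since packed keys can collide
                  return offset
      return None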

You could also cache recently loaded records to reduce disk reads.  The cache would tend to contain records where there is a hash match, since those are the ones that need to be read most often.
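A minimal sketch of such a cache, again with made-up names and an arbitrary size:

  CACHE_SIZE = 32
  cache = {}        # offset -> record already read from disk
  cache_order = []  # offsets, oldest first

  def cached_read(offset, read_record_at):
      if offset in cache:
          return cache[offset]           # hit: no disk access needed
      rec = read_record_at(offset)       # miss: one disk read
      cache[offset] = rec
      cache_order.append(offset)
      if len(cache_order) > CACHE_SIZE:  # evict the oldest record
          del cache[cache_order.pop(0)]
      return rec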

To get around the dynamically sized array limitation, just make the array as large as possible and keep track of where the current end is.
To insert new data, just increment the end-of-array variable and stick the new data on the end.
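Something along these lines (the size and names are arbitrary):

  MAX_REFS = 600                 # pick the largest size you expect to need
  refs = [None] * MAX_REFS       # preallocated, fixed-size array
  ref_count = 0                  # marks where the current end is

  def append_ref(entry):
      global ref_count
      if ref_count >= MAX_REFS:
          raise RuntimeError("reference array is full")
      refs[ref_count] = entry    # stick the new data on the end
      ref_count += 1             # move the end-of-array marker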

Since your array will contain multiple values (a data structure?), I suggest not manipulating it directly when sorting, but instead creating a 2nd array that holds indexes into the 1st. Then just sort the 2nd array based on the content of the first.

Basically, the 2nd array has pointers to the in-memory data, which contains the hash and pointers to the disk data.
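Roughly like this, with invented example values:

  records = [                # (hash, disk_offset) pairs in arrival order
      (0x4C495445, 1024),
      (0x41424344, 2048),
      (0x5A5A0000, 512),
  ]

  order = list(range(len(records)))        # 2nd array: indexes into records
  order.sort(key=lambda i: records[i][0])  # sort the indexes by the hash field

  for i in order:
      print(records[i])                    # visit in sorted order; records never move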

I'm thinking you will be pushing the limit on RAM, which is why early compilers did things in multiple phases, with a different application for each phase of the process.


James

----quote----------
I have come to the conclusion that "fast" and "disk files" do not belong in the same sentence together. "Fast disk search" is an oxymoron.

I was going over the post from jdiffendaffer concerning searches. I knew that I was getting the entire record before comparing the fields, and thought, "What if I only get the fields that are to be compared? Surely getting 1 byte and 1 other field for comparison is faster than getting the whole record. Then, if a match is found, I can seek to the correct position and update the reference count." Well, the extra seeks involved in getting 2 fields of a record from a disk file slowed the program down a lot. Decoding createmsg went from 2:10 to 2:49.

It looks like the only way to speed the searches and sorts up is to find a way to deal with them in memory, at least the parts that need to be compared. I would use arrays, but there is a fundamental problem: differing sizes of arrays. One procedure may contain 300 variable references, while another contains over 2000. One may contain no line references while another contains over 200. And the literals? Decode (before the string optimizations) contained 549 unique literal references, and over 1000 total. I wish there were a way to dynamically add elements to an array, but there isn't. Because of this, I'd have to establish arrays large enough to handle the largest number of references that could be contained in a procedure.

I have calculated that, even if the arrays only contain the fields for comparison, the variable reference array would have to be at least 600 elements, each element composed of 1 byte and 1 integer, for a total of 1800 bytes. The line reference array would be structured similarly, but wouldn't need more than 300 elements, since I have yet to see a procedure with more than 150 line references, though that may not be the maximum I could run into. So, that's another 900 bytes. The literal references would need 2 bytes, 1 integer, 1 real, and a string of 80 per element, and (based on decode) I would need at least 600 elements. That's 89 bytes times 600 elements, for a total of 53,400 bytes.

Add all three together, and you get a total size, just for the arrays, of 56,100 bytes of data space. I don't think OS9 or NitrOS9 will allow data of that size in memory.
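(Worked out explicitly, assuming the 2-byte integer and 5-byte real sizes above, the totals check out:)

  var_refs  = 600 * (1 + 2)            # 1 byte + 2-byte integer                    = 1800
  line_refs = 300 * (1 + 2)            # same layout                                =  900
  literals  = 600 * (2 + 2 + 5 + 80)   # 2 bytes + integer + real + 80-char string  = 53400
  print(var_refs, line_refs, literals, var_refs + line_refs + literals)   # 1800 900 53400 56100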

You may ask, why do you need to count all those literals anyway? I knew, back when I wrote DCom, that I needed to find a way to correlate variables passed as parameters to the procedures they are passed to. While this is not necessary to the idea of reproducing the source code of any one procedure, it would help with figuring out how your procedures were related.

Named subroutines are the easy ones. The procedure name exists in the subroutine section of the VDT. If you used a string variable to name the procedure to be called, all you get is a reference to a string variable. Without knowing what the name of the procedure is, it's impossible to match parameters between them.

In addition, counting the literals can help show you where you can optimize the literals and reduce code size. While I still haven't gotten to the point where I can say literal X contains the name of the procedure called in RUN statement Y, I am getting closer to that mark.

I can probably put sorting the literals (and dealing with that association) in a different program, but I still have to be able to identify them, and the first pass through the instruction code seems like the best time to do it. But maybe, for the above reasons, it isn't. Maybe I should put literals on the shelf for now and concentrate on getting decode to correctly identify everything else.

Wayne 


--
Coco mailing list
Coco at maltedmedia.com
http://five.pairlist.net/mailman/listinfo/coco



      

