[Coco] the final stage of optimization

Wayne Campbell asa.rand at yahoo.com
Fri Nov 20 23:45:22 EST 2009


When I wrote DCom I used a RAM disk, and I recommended using one because it sped up the program significantly. With my current setup, though, I am unable to set up a RAM drive. The NitrOS9 disk image I'm using is not a proper system master, and I'm spending all of my available time on this project, so I just don't have time to figure it out right now. I'm attempting to get a copy of the OS9 system disk to boot, but for reasons unknown the screen comes up black-on-black with a magenta cursor, and the display commands don't change it. If I can't see what I'm doing, I can't work on anything. So, I make do with what I have.

The sort I'm using is based on the bubble sort, but instead of repeatedly starting at the top and working to the bottom, it sorts back upward when it reaches the bottom. This way it is constantly sorting in both directions and uses fewer iterations of the loop. It proved to be over twice as fast as repeated top-to-bottom passes.
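For anyone curious, that bidirectional variant is usually called a cocktail (shaker) sort. Here is a minimal Python sketch of the idea; the function name and the in-memory list are just for illustration, not the actual DCom code, which works on disk records:

```python
def cocktail_sort(items):
    """Bidirectional bubble sort: sweep down, then back up,
    shrinking the unsorted window from both ends each pass."""
    lo, hi = 0, len(items) - 1
    swapped = True
    while swapped and lo < hi:
        swapped = False
        # Downward pass: bubble the largest element toward the bottom.
        for i in range(lo, hi):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        hi -= 1
        # Upward pass: bubble the smallest element toward the top.
        for i in range(hi, lo, -1):
            if items[i] < items[i - 1]:
                items[i], items[i - 1] = items[i - 1], items[i]
                swapped = True
        lo += 1
    return items
```

The savings come from trimming the window at both ends after each pair of passes, so elements that only need to move a short distance in either direction get there quickly.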

Thanks for the tips. I've heard of mergesort, but have never seen it in operation or looked at its code. I know there were many things developed on the mainframes for dealing with massive volume sequential data, but I've never been around it. Always looking from a distance. I'll see what I can find on the web.

Wayne




________________________________
From: Gary <bear at bears.org>
To: CoCoList for Color Computer Enthusiasts <coco at maltedmedia.com>
Sent: Fri, November 20, 2009 11:20:20 AM
Subject: Re: [Coco] the final stage of optimization

On Fri, 20 Nov 2009, Aaron Wolfe wrote:

> Hi, I don't know your project well enough to be sure, but this seems
> like a situation where keeping the data in ram, or at least an index

Years ago, I wrote a database for Basic09 and the only sort I knew at the time was bubblesort -- using a ramdisk instead of the physical floppy sped it up by two orders of magnitude.

If something like that isn't an option -- since it sounds like the data is largely sequential (one long file broken into records), what about treating it like tape?  There are a number of well-understood algorithms from the 50s that might apply, such as mergesort, which were designed for sorting and searching large volumes of data on machines with very little RAM.
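The tape-style approach is: sort small runs that fit in memory, write each run out, then merge the runs back into one sorted stream. A minimal Python sketch (the function name, `run_size`, and the use of temp files are illustrative assumptions, not anything OS-9-specific):

```python
import heapq
import tempfile

def external_sort(records, run_size=4):
    """Tape-style merge sort: write small sorted runs to scratch
    files, then k-way merge the runs into one sorted sequence."""
    runs = []
    # Phase 1: sort run_size records at a time and spill each run.
    for start in range(0, len(records), run_size):
        chunk = sorted(records[start:start + run_size])
        f = tempfile.TemporaryFile(mode="w+")
        f.writelines(rec + "\n" for rec in chunk)
        f.seek(0)
        runs.append(f)
    # Phase 2: merge the sorted runs; only one record per run
    # needs to be in memory at any moment.
    merged = [line.rstrip("\n") for line in heapq.merge(*runs)]
    for f in runs:
        f.close()
    return merged
```

Only `run_size` records plus one record per run are ever held in memory at once, which is what made this family of algorithms practical on small machines.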

Might be worth a shot?

Peace,
Gary

--
Coco mailing list
Coco at maltedmedia.com
http://five.pairlist.net/mailman/listinfo/coco