My program is a command-line (shell) program that works under most Unix variants. I develop on Mac OS X.
It reads data from text files; once the files are read, the text corpus is held completely in memory.
Special commands let me navigate through the text data.
This works fine.
The only problem: my data structures consume far too much memory. A total of 756 kB of text data turns into more than 12 MB in memory!
This prevents the program from reading larger amounts of text files.
I'm using a hash table and an AVL tree, and I store some information redundantly. But even so, the program should not consume this much memory.
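For illustration (I am only guessing at the layout here; the real structs are in the zip), a typical tree node like this already costs several times the word it stores, because of pointers, padding, and malloc bookkeeping:

    /* Hypothetical node layout, not my actual code */
    struct avl_node {
        struct avl_node *left, *right; /* 16 bytes on a 64-bit system */
        int              balance;      /* 4 bytes + 4 bytes padding   */
        char            *word;         /* 8 bytes, plus a separately  */
    };                                 /* malloc'd copy of the word   */

A 6-byte word stored this way can easily occupy 50+ bytes once malloc's own per-block overhead is counted, which goes a long way toward explaining a 16x blow-up.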
This is why I need help: somebody who knows how to use gdb or similar tools to track down where the memory consumption really happens, and fix it.
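For example, even a simple counting wrapper around malloc() would show how many bytes the structures really request (just a sketch; the names xmalloc and report_allocs are my own, not from my code):

    #include <stdio.h>
    #include <stdlib.h>

    static size_t total_bytes = 0;
    static size_t total_calls = 0;

    void *xmalloc(size_t n)        /* use in place of malloc() */
    {
        total_bytes += n;
        total_calls += 1;
        return malloc(n);
    }

    void report_allocs(void)       /* register with atexit() */
    {
        fprintf(stderr, "allocated %zu bytes in %zu calls\n",
                total_bytes, total_calls);
    }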
Perhaps the strings could be stored in some compressed form; perhaps a different hash table is needed. I don't know.
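One direction I could imagine (only a sketch with made-up names, and it does not deduplicate yet): interning all strings into one big append-only pool instead of doing one malloc() per word, so each string costs only its own bytes:

    #include <string.h>

    #define POOL_SIZE (1 << 20)     /* 1 MB; grow or chain in practice */
    static char   pool[POOL_SIZE];
    static size_t pool_used = 0;

    /* Copy s into the pool and return the pooled pointer. */
    const char *intern(const char *s)
    {
        size_t len = strlen(s) + 1;
        if (pool_used + len > POOL_SIZE)
            return NULL;            /* pool full */
        char *p = memcpy(pool + pool_used, s, len);
        pool_used += len;
        return p;
    }

A real version would first look the word up in the hash table and return the existing pooled copy, so each distinct word is stored exactly once.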
I will provide the source as a zip file. Inside there is a directory "amazon" that contains the 756 kB of text data.
Additionally, there is a [url removed, login to view] describing how to start the program.
If you can bring the memory consumption down to, let's say, 4 MB or lower, I will pay an extra bonus of $500.
Thank you guys.
All the best