When the ever-present challenge of saving space reaches its limit, so that a file cannot be compressed any further, a new question arises: "How can we improve our compression even more?" The answer is obvious: "Let's speed it up!" This article looks for the meeting point between space saving and compression-time reduction. The reduction is based on the idea that a task can be broken into smaller subtasks which are compressed simultaneously and then joined together. Five different compression algorithms are used, two of which are entropy coders and three of which are dictionary coders. An individual analysis is given for every compression algorithm, and in the end the algorithms are compared by compression ratio and speed depending on the number of cores used. To summarize the work, a speedup diagram is given to see whether Mr. Amdahl and Mr. Gustafson were right.
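The chunk-and-join scheme described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the choice of zlib, the 64 KiB chunk size, and the thread-based worker pool are all stand-in assumptions (zlib releases the GIL while compressing, so threads can run in parallel here).

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 1 << 16  # assumed 64 KiB chunks; the paper's actual size may differ

def compress_chunk(chunk: bytes) -> bytes:
    # Each subtask is compressed independently of the others.
    return zlib.compress(chunk)

def parallel_compress(data: bytes, n_workers: int = 4) -> list[bytes]:
    # Break the input into fixed-size subtasks ...
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    # ... compress them simultaneously ...
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(compress_chunk, chunks))

def parallel_decompress(compressed_chunks: list[bytes]) -> bytes:
    # ... and join the decompressed pieces back together.
    return b"".join(zlib.decompress(c) for c in compressed_chunks)
```

Because each chunk is compressed on its own, the dictionary cannot span chunk boundaries, so the ratio is typically slightly worse than single-stream compression; this is exactly the space-versus-time trade-off the article examines.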