Bad Jim said:
Unequivocally, I'm going to have to say "no" on this. I can hardly imagine a scenario where a compression algorithm that made a substantial difference in file sizes for a game wouldn't come at an untenable CPU cost, particularly considering that most consoles, despite having multiple cores, still only run at a paltry ~1.6 GHz.
As evidence for this, consider the amount of time spent developing compression algorithms for video game assets versus the amount of time spent developing caching algorithms for those same assets. To put it more bluntly: you are not going to spend 20 seconds of CPU cycles to get a theoretical maximum 10% compression ratio on a binary asset. It's just a flat-out bad idea, if for no other reason than that CPU cycles need to be available on demand to avoid stuttering, while graphical assets can be loaded in off-time before they are needed (ergo: you can't execute a CPU cycle before you need it, but you can load a texture map before you need to display it).
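To make that last point concrete, here's a minimal sketch (plain C++17, standard library only, nothing engine-specific) of what I mean by loading in off-time versus paying CPU at the point of use. The asset path, load_raw, and decompress are all hypothetical stand-ins, and decompress just fakes a 2:1 codec; the point is only where the cost lands, not how any real codec works.

```cpp
// Sketch: I/O can be hidden by starting it early, but CPU-side decompression
// is a cost you pay at (or near) the moment of use.
#include <chrono>
#include <fstream>
#include <future>
#include <iostream>
#include <iterator>
#include <string>
#include <vector>

using Bytes = std::vector<char>;

// Read a whole file into memory. Cheap to overlap with gameplay ("off-time").
Bytes load_raw(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    return Bytes(std::istreambuf_iterator<char>(in), {});
}

// Stand-in for whatever CPU-side codec the distribution media used.
// This is the cost that has to be paid on demand.
Bytes decompress(const Bytes& packed) {
    Bytes out;
    out.reserve(packed.size() * 2);              // pretend 2:1 ratio
    for (char c : packed) { out.push_back(c); out.push_back(c); }
    return out;
}

int main() {
    // "Off-time": kick off the disk read while the player is still in the
    // previous area. The I/O latency overlaps with gameplay and is hidden.
    std::future<Bytes> pending =
        std::async(std::launch::async, load_raw, "assets/level2_wall.tex");

    // ... gameplay continues here ...

    // Moment of use: the raw bytes are (hopefully) already resident, but a
    // compressed asset still costs CPU cycles *now*, inside the frame budget.
    Bytes packed = pending.get();

    auto t0 = std::chrono::steady_clock::now();
    Bytes texels = decompress(packed);
    auto t1 = std::chrono::steady_clock::now();

    std::cout << "decompress took "
              << std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count()
              << " us for " << texels.size() << " bytes\n";
}
```

The disk read overlaps with gameplay essentially for free, but whatever cycles decompress costs still have to come out of a frame budget somewhere, which is exactly why I'd rather just ship the bigger file.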
I should note here that I'm not talking about GPU-based compression, like the CUDA JPEG codec, but about some theoretical compression algorithm that a game developer could ad hoc bake into their game to reduce the size of the distribution media. That distinction may not mean much to everyone, but it's an important one to make in my eyes.