I've released (on PyPI) versions of RunSnakeRun and SquareMap that support Meliae dumps... but I'm not really satisfied with the functionality. Basically, the dumps seem to have 2-3MB of unaccounted-for objects in them. In my trivial test that's about 10x more memory than can be reached from module references. This small dump of ~13,000 objects has ~2,800 objects that are not referenced by any other object at all, and ~10,000 that do not seem to be reachable from the modules.
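For anyone who wants to poke at the same question, here's a minimal sketch of how you might find those never-referenced records yourself. It assumes the newline-delimited JSON layout Meliae writes (one record per line with at least `address` and `refs` fields) and uses a tiny synthetic dump rather than a real one:

```python
import json

def unreferenced(dump_lines):
    """Return records whose address never appears in any other record's refs.

    Assumes each line is a JSON object with "address" (an int) and
    "refs" (a list of addresses), as in a Meliae dump file.
    """
    records = [json.loads(line) for line in dump_lines if line.strip()]
    referenced = set()
    for rec in records:
        referenced.update(rec.get("refs", []))
    return [rec for rec in records if rec["address"] not in referenced]

# Tiny synthetic dump: object 1 refs object 2; nothing refs 1 or 3.
dump = [
    '{"address": 1, "type": "dict", "size": 280, "refs": [2]}',
    '{"address": 2, "type": "str", "size": 50, "refs": []}',
    '{"address": 3, "type": "list", "size": 72, "refs": []}',
]
orphans = unreferenced(dump)
# Objects 1 and 3 come back as "not referenced by any other object".
```

On a real dump you'd stream the file line by line instead of holding every parsed record in a list, but the set-membership idea is the same.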
Even if all the objects were accounted for, the performance is unacceptable. I wrote a custom JSON parser and did quite a bit of work speeding up the post-processing, but it still takes a ridiculous amount of time to load, and once it's up the various views are glacial due to the sheer number of objects. I could filter a lot of the objects out of the list views, but the square-map hit-testing is unreasonably slow as well (it shouldn't be, really, since it uses a tree-map, but it is).
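To illustrate why hit-testing a tree-map "shouldn't be" slow: you only ever need to descend into the one child rectangle that contains the point, so the cost is roughly depth times fan-out, not the total number of rectangles. A toy sketch (the `Node` structure and names here are hypothetical, not SquareMap's actual internals):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    # Axis-aligned rectangle plus nested children, as a tree-map lays out.
    x: float
    y: float
    w: float
    h: float
    label: str = ""
    children: list = field(default_factory=list)

def contains(node, px, py):
    return node.x <= px < node.x + node.w and node.y <= py < node.y + node.h

def hit_test(node, px, py):
    """Return the deepest node containing (px, py), or None.

    Only the branch containing the point is explored, so the cost is
    O(depth * fan-out) rather than a scan over every rectangle.
    """
    if not contains(node, px, py):
        return None
    for child in node.children:
        hit = hit_test(child, px, py)
        if hit is not None:
            return hit
    return node

root = Node(0, 0, 100, 100, label="root", children=[
    Node(0, 0, 50, 100, label="left"),
    Node(50, 0, 50, 100, label="right", children=[
        Node(50, 0, 50, 50, label="right-top"),
    ]),
])
# hit_test(root, 60, 10) lands in "right-top"; (10, 10) lands in "left".
```

If the real code is scanning all leaves per mouse move instead of descending like this, that alone would explain the sluggishness.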
All those issues seem like solvable problems, but I've spent too long on this little excursion already, so for now I'll just leave it as is and see whether what's there is useful enough for people to spend more time on it. If you use it, let me know and I may decide to invest a bit more time in it.