I'm viewing a large set of .log files which have come from a Trillian chat client. They contain a wealth of chat logs, but they are proving very hard on the eyes because the special characters, spaces, etc. have all been replaced with percent-encoded (URL-encoded) values.
By that I mean instead of a space there is %20, instead of an exclamation mark there is %21, and so on.
If it were only a single document I'd be happy to simply use find and replace to get rid of all the code, but there are hundreds of logs here.
Is anyone aware of a tool that can be used to view .log files and decipher/replace the code automatically?
Could you not export them all, use Notepad++ to open them all at the same time and do a find and replace across all the files?
The find and replace option works, but you still have to identify and enter every single coded character and its replacement individually, don't you?
I'm not intimately familiar with notepad++
I found a solution that seems to have worked quite well.
Copied out the entire 'Trillian' folder from Program Files, ran Trillian and registered with a dummy email account on my test machine. Then I copied the user profile files from the original over the top of mine, opened Trillian, and viewed the chat history.
It's at least allowed me to view the files in plain format so I can search them more easily to see if there is anything of interest.
Someone else had posted the Notepad++ option for a similar issue with Chinese characters and URLs that might work for you. You don't need to do a find/replace; under Plugins there are a number of decoders and converters that should help. URL Decode is an option under MIME Tools and works well.
I don't have any Trillian log data to test but it's easy and worth a try at least.
Jamie
Thanks Jamie, turns out Magnet IEF knows just how to deal with these logs 😉
If you didn't have a license for IEF and had a bit of Perl knowledge, you could use the Perl module URI::Escape.
Basically, for each line run uri_unescape($line) and then print it out.
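Something along these lines should do it (a rough sketch, untested against real Trillian logs; the .decoded output filename is just for illustration):

#!/usr/bin/perl
# Decode percent-encoded text in every .log file passed on the command line,
# writing the readable version alongside the original as <file>.decoded
use strict;
use warnings;
use URI::Escape qw(uri_unescape);

foreach my $file (@ARGV) {
    open my $in,  '<', $file           or die "Can't read $file: $!";
    open my $out, '>', "$file.decoded" or die "Can't write $file.decoded: $!";
    while (my $line = <$in>) {
        print $out uri_unescape($line);   # %20 -> space, %21 -> '!', etc.
    }
    close $in;
    close $out;
}

Run it as: perl decode_logs.pl *.log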