markmac99
Now with added containers!

Jul 23, 2022

Yahoo! I managed to get the distributed processing working :)
The approach I chose was to use AWS's container service (ECS). I made a few tweaks to the correlation library so that, with an optional parameter, it can dump out candidate matches to files (in Python pickle format). These are then distributed to containers in groups of 20, cutting runtime and cost dramatically. Previously it took about one minute per match, so a busy night of Perseids might take 6-8 hours (and cost $$$$). Now the workload processes in about 30 minutes regardless of the number of matches to check, and it's also about half as costly. A rough sketch of the chunk-and-dispatch idea is shown below.
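To illustrate the idea, here's a minimal sketch in Python of the chunk-and-dispatch step. The function names, file names, and the CANDIDATE_FILE environment variable are my own illustrative assumptions, not the actual correlation-library or task code; the point is just pickling candidates in groups of 20 and launching one ECS task per group.

```python
import pickle
from pathlib import Path

import boto3  # only needed for the optional task-launch step

GROUP_SIZE = 20  # candidate matches handed to each container

def dump_candidate_groups(candidates, outdir="candidates"):
    """Pickle the candidate matches in groups of GROUP_SIZE, one file per group."""
    out = Path(outdir)
    out.mkdir(parents=True, exist_ok=True)
    files = []
    for i in range(0, len(candidates), GROUP_SIZE):
        fname = out / f"candidates_{i // GROUP_SIZE:04d}.pickle"
        with open(fname, "wb") as f:
            pickle.dump(candidates[i:i + GROUP_SIZE], f)
        files.append(fname)
    return files

def launch_tasks(files, cluster="meteor-cluster", task_def="correlator",
                 subnets=("subnet-xxxxxxxx",)):
    """Launch one Fargate task per pickle file, telling each container which file to process."""
    ecs = boto3.client("ecs")
    for fname in files:
        ecs.run_task(
            cluster=cluster,
            taskDefinition=task_def,
            launchType="FARGATE",
            networkConfiguration={"awsvpcConfiguration": {"subnets": list(subnets)}},
            overrides={"containerOverrides": [{
                "name": "correlator",
                "environment": [{"name": "CANDIDATE_FILE", "value": str(fname)}],
            }]},
        )
```

In this sketch each container would simply load the pickle file named in its environment and process those 20 candidates, which is why the total runtime stays roughly flat however many matches there are.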
