C# - Trying to optimize I/O for MongoDB
I have an updater script that runs every few hours for various regions on a gaming server. I'm looking to run this script more often and to add more regions. Ideally I'd love to spread the CPU and I/O load as evenly as possible. I used to run this script using MySQL, but the website uses MongoDB for everything, so it kinda made sense to move the updater scripts to MongoDB too. Now I'm having high I/O spikes when MongoDB flushes all of the updates to the database.
The script is written in C#, although I don't think that's relevant. More importantly, it's doing 500k to 1.2 million updates each time one of these scripts runs. I have done some small optimizations in the code and with indexes, but at this point I'm stuck on how to optimize the actual MongoDB settings.
Some other important information: I'm doing this:
update({'someidentifier':1}, $newdocument)
instead of this:
$set : { internalname : 'newname' }
I'm not sure if that is a lot worse in performance than doing a $set or not.
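To make that comparison concrete, here is a hypothetical sketch in plain Node.js. Only `someidentifier` and `internalname` come from the question; every other field is invented for illustration. It measures how much data each update form would have to ship: a full-document replacement serializes the whole document, while a `$set` serializes only the changed field.

```javascript
// Hypothetical document shape - fields beyond 'someidentifier' and
// 'internalname' are made up for illustration.
const fullDocument = {
  someidentifier: 1,
  internalname: 'newname',
  stats: { wins: 120, losses: 45, rating: 1830 },
  history: Array.from({ length: 50 }, (_, i) => ({ match: i, result: 'win' })),
};

// Full replacement: update({'someidentifier':1}, $newdocument)
// sends (and rewrites) the entire document.
const replacePayload = JSON.stringify(fullDocument);

// Targeted update: update({'someidentifier':1}, {$set: {internalname: 'newname'}})
// sends only the modified field.
const setPayload = JSON.stringify({ $set: { internalname: 'newname' } });

console.log(replacePayload.length, setPayload.length);
```

The `$set` payload is a small fraction of the replacement's size, so there is less to send over the wire and less to log, though whether that translates into fewer disk flushes depends on the storage engine.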
What can I try to spread the load out? I can assign more memory to the VM if that would help as well.
I'm happy to provide more information.
Here are my thoughts:
1) Explain your performance concerns.
So far I can't really figure out what your issue is, or if you have one at all. As far as I can tell you're doing around a GB of updates and writing about a GB of data to disk... not much of a shock.
Oh, and do some damn testing - "Not sure if that is a lot worse in performance than doing a $set or not." - why don't you know? What do your tests say?
2) Check to see if there is a hardware mismatch.
Is your disk just slow? Is your working set bigger than RAM?
3) Ask on mongo-user and other MongoDB-specific communities...
...simply because you might get a better answer there than the lack of answers here.
4) Consider trying TokuMX.
Wait, what? Didn't I just accuse the last guy who did that of spamming his own product?
Sure, it's a new product that's only recently been introduced into Mongo (it appears to have had a MySQL version for a bit longer), but the fundamentals seem sound. In particular it's very good at being fast for not only insertions, but updates/deletions too. It does this by not needing to go and actually make the changes to the document in question - instead it stores the insertion/update/deletion message in a buffered queue as part of the index structure. As the buffer fills up it applies these changes in bulk, which is massively more efficient in terms of I/O. On top of that, it uses compression when storing data, which should additionally reduce I/O - there's physically less to write.
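As a toy illustration of that buffering idea - this is NOT TokuMX's actual fractal-tree implementation, and the class, threshold, and numbers here are invented - queuing mutations and applying them in bulk turns many small writes into a few large ones:

```javascript
// Toy model: updates are queued in memory and only applied to the
// backing store in bulk when the buffer fills.
class BufferedStore {
  constructor(flushThreshold) {
    this.flushThreshold = flushThreshold;
    this.buffer = [];        // pending update messages
    this.store = new Map();  // stand-in for the on-disk data
    this.flushes = 0;        // count of bulk writes (our proxy for I/Os)
  }
  update(key, value) {
    this.buffer.push({ key, value });
    if (this.buffer.length >= this.flushThreshold) this.flush();
  }
  flush() {
    if (this.buffer.length === 0) return;
    for (const { key, value } of this.buffer) this.store.set(key, value);
    this.buffer = [];
    this.flushes++; // one bulk write instead of one write per update
  }
}

const db = new BufferedStore(100);
for (let i = 0; i < 1000; i++) db.update(i % 250, i);
db.flush(); // drain anything left over
console.log(db.flushes); // → 10 bulk writes for 1000 updates
```

The trade-off is the usual one: pending changes sit in memory longer, in exchange for far fewer (and larger, more sequential) write operations.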
The biggest disadvantage I can see so far is that its best performance is seen with big data - if your data fits into RAM it loses to B-trees in a bunch of tests. Still fast, just not as fast.
So yeah, it's very new and I would not trust it without testing, and even then only for non-mission-critical stuff, but it might be what you're looking for. And TBH, since it's a new index/store sub-system... it fits the bill of being an optimization for MongoDB more than a separate product. Especially since index/storage systems in MongoDB have always been a bit simple - "let's use memory-mapped files for caching" etc.