[Mantis-ti-discussion] A reference that may be useful ... and a problem I am working on


Bernd Grobauer

There are two pieces of information I want to share with you:

1) Install scripts for mantis under docker
   I was made aware of the following: https://github.com/2xyo/docker-mantis
   I must confess that I had not heard of Docker before, but
   it seems to be something I will want to catch up on fast.
   Anyhow: since this may be useful to some of you, I am
   passing it along here.
   Maybe '2xyo' is on this mailing list? If so: thanks for
   sharing this!

2) Painfully slow import in dingos 0.2.0: I am working on it!

   As we move towards production with more and more objects in the database,
   problems with ill-formed queries that did not really show up before are
   starting to appear. The first instance of this, an ill-formed query in the
   generation of the 'filter' box that led to a slow load process for filter
   pages, has been corrected in dingos 0.2.0.
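   For readers wondering what this kind of scaling problem looks like in
   general, here is a purely illustrative Python sketch -- this is NOT
   dingos code, and all names in it are invented. It mimics the common
   pattern of a query structure that is harmless on a small database but
   whose cost explodes as the table grows: one database round trip per
   object instead of a single query overall.

```python
# Illustrative sketch only -- not actual dingos code. FakeDb simulates
# a database table and counts round trips, so the difference between
# the per-object pattern and the single-query pattern is visible.

class FakeDb:
    """Stand-in for a database table; counts simulated round trips."""
    def __init__(self, rows):
        self.rows = rows
        self.query_count = 0

    def fetch_one(self, key):
        self.query_count += 1          # one round trip per call
        return self.rows.get(key)

    def fetch_all(self):
        self.query_count += 1          # a single round trip
        return dict(self.rows)

rows = {i: "value-%d" % i for i in range(1000)}

# Slow pattern: N queries for N objects -- invisible with 50 rows,
# painful with 100,000.
db = FakeDb(rows)
slow = [db.fetch_one(i) for i in range(1000)]
assert db.query_count == 1000

# Fast pattern: one query, then in-memory lookups.
db = FakeDb(rows)
cache = db.fetch_all()
fast = [cache[i] for i in range(1000)]
assert db.query_count == 1

assert slow == fast
```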

   The _extreme_ slowness of the current import (if you have lots of objects
   in the database) seems to be a similar problem, since simply creating an
   empty PLACEHOLDER object is extremely slow. I will try to investigate this
   during the next days and let you know what I come up with.
   Note that this issue is not about the general problem of importing large
   amounts of data: as I stated in the documentation about what Mantis is
   and what it is not, importing huge volumes of data is not Mantis's strong
   point -- at least currently it is not fit for importing huge reports
   (e.g., MAEC) or lots and lots of reports.
   The problem here, as I perceive it, is that the deduplication performed
   when putting information into the system requires a lot of queries issued
   by Django to the database, which simply takes time. I think this can be
   fixed to some extent by going in-memory for queries on smaller datasets
   that change infrequently (e.g., the list of fact terms should never become
   huge) and by using stored procedures for bulk insertion of values... If
   any of you has ideas on this, please get in touch!
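   To make the two proposed remedies concrete, here is a hedged sketch --
   again NOT dingos code; the class and function names are invented for
   illustration. Idea (1): mirror a small, slowly-changing lookup table
   (such as the list of fact terms) in memory, so deduplication checks do
   not hit the database at all. Idea (2): insert many new values in a
   single round trip (in a real backend via executemany() or a stored
   procedure) instead of one query per value.

```python
# Hedged sketch, not dingos code: all names here are invented.

class FactTermCache:
    """In-memory mirror of a small, rarely-changing table."""
    def __init__(self):
        self._terms = {}        # term -> id (simulated table contents)
        self._next_id = 1
        self.db_round_trips = 0

    def get_or_create(self, term):
        # Deduplicate in memory; only a genuinely new term costs a query.
        if term not in self._terms:
            self.db_round_trips += 1            # one INSERT
            self._terms[term] = self._next_id
            self._next_id += 1
        return self._terms[term]


def bulk_insert(rows, table, counter):
    """Insert many rows in a single round trip -- in a real backend this
    would be executemany() or a stored procedure."""
    counter["db_round_trips"] = counter.get("db_round_trips", 0) + 1
    table.extend(rows)


cache = FactTermCache()
terms = ["Address/value", "File/hash", "Address/value", "File/hash"]
ids = [cache.get_or_create(t) for t in terms]
assert ids == [1, 2, 1, 2]
assert cache.db_round_trips == 2    # duplicates never touched the db

values_table, counter = [], {}
bulk_insert(["1.2.3.4", "deadbeef"], values_table, counter)
assert counter["db_round_trips"] == 1
```

   The trade-off, of course, is cache invalidation: this only works for
   tables that grow slowly and are never modified concurrently by another
   writer, which is why the fact-term list is a plausible candidate and the
   main object tables are not.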

Kind regards,

Bernd

Mantis-ti-discussion mailing list
[hidden email]