I apologize if this has already been talked about somewhere.
I am working on a project where I need to deliver a graph to the frontend as quickly as possible. (Memgraph has already been a tremendous help; I originally started with the data in Postgres, thinking ‘How long could it take?’)
One of my sample queries returns a map containing 5.5k nodes and ~33k relationships. Running this query in Memgraph Lab takes about 0.3s. At least that’s what it reports, and I believe it, since the visualization starts pretty much right away.
Unfortunately, when I try to run the same query in other ways, it takes much longer.
Using mgclient from Python, I get this from the profiler:
ncalls tottime percall cumtime percall filename:lineno(function)
1 3.736 3.736 3.736 3.736 {method 'execute' of 'mgclient.Cursor' objects}
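For reference, the measurement was along these lines (a rough sketch; host/port are placeholders for my setup, and QUERY stands in for my actual query):

```python
import cProfile
import mgclient

# Placeholder connection details for my setup.
conn = mgclient.connect(host="127.0.0.1", port=7687)
conn.autocommit = True
cursor = conn.cursor()

# Stand-in query; the real one is the MATCH ... project(p) query shared further down.
QUERY = "MATCH (n) RETURN count(n);"

# Profile just the execute() call -- this is where the ~3.7 s shows up.
cProfile.run("cursor.execute(QUERY)", sort="cumtime")
```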
Using mgconsole, it reports this:
(round trip in 3.715 sec)
Using the mgclient library in a small C++ program based on one of the examples, I get:
time to connect: 0.10363
time to execute: 0.00786637
time to fetch: 3.7809
Fetched 1 row(s)
total time: 3.90357s
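For comparison, here is a minimal Python sketch along the same lines as the C++ program, timing connect, execute, and fetch separately (host/port are placeholders, and QUERY is the project(p) query I share in my follow-up below). Note that with the default, non-lazy pymgclient connection most of the work seems to happen inside execute() rather than fetchall(), which matches the cProfile output above; pymgclient also appears to have a lazy connection mode that defers retrieval to the fetch calls, which would mirror the C++ split more closely.

```python
import time
import mgclient

QUERY = """
MATCH p = (n:MGPerson {match_name: "jonqpublic"})-[l:CRAB *..1]-(m)-[r:CRAB|FISH *..1]-(s)
WITH project(p) AS f
RETURN f;
"""

t0 = time.perf_counter()
conn = mgclient.connect(host="127.0.0.1", port=7687)  # placeholder host/port
conn.autocommit = True
cursor = conn.cursor()
t1 = time.perf_counter()

# With the default (non-lazy) connection, execute() appears to pull the
# results as well, so most of the elapsed time lands here.
cursor.execute(QUERY)
t2 = time.perf_counter()

rows = cursor.fetchall()
t3 = time.perf_counter()

print(f"time to connect: {t1 - t0:.5f}")
print(f"time to execute: {t2 - t1:.5f}")
print(f"time to fetch:   {t3 - t2:.5f}")
print(f"Fetched {len(rows)} row(s)")
print(f"total time: {t3 - t0:.5f}s")
```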
The total times for all the things I tried make sense, since they should all be using the same library, but I don’t really understand how Memgraph Lab manages to be so much faster. Is it just that Memgraph Lab is running on the same machine as Memgraph, or is there something else at play that I’m misunderstanding? Any thoughts on speeding up the query/fetch?
My last reply hasn’t been released by the anti-spam bot yet, but I did some more poking around.
Since it was in the last reply, here is the query again:
MATCH p = (n:MGPerson {match_name: "jonqpublic"})-[l:CRAB *..1]-(m)-[r:CRAB|FISH *..1]-(s)
WITH project(p) AS f
RETURN f;
I tried profiling the query from a few different places: Memgraph Lab, mgconsole, and Python. The results were all about the same. This was the output from the Python test:
* Produce 2 2.329103 % 2.754256 ms
* Produce 2 2.953431 % 3.492548 ms
* Aggregate 2 40.975408 % 48.455036 ms
* ConstructNamedPath 93605 38.071282 % 45.020792 ms
* EdgeUniquenessFilter 93605 4.953313 % 5.857488 ms
* ExpandVariable 93786 10.669720 % 12.617364 ms
* ExpandVariable 182 0.037080 % 0.043849 ms
* ScanAllByLabelPropertyValue 2 0.010545 % 0.012470 ms
* Once 2 0.000118 % 0.000140 ms
total time was ~118.25 ms
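In case it helps, the Python test was roughly shaped like this: PROFILE is just prepended to the query, and each returned row describes one operator (host/port are placeholders for my setup).

```python
import mgclient

conn = mgclient.connect(host="127.0.0.1", port=7687)  # placeholder host/port
conn.autocommit = True
cursor = conn.cursor()

cursor.execute("""
PROFILE
MATCH p = (n:MGPerson {match_name: "jonqpublic"})-[l:CRAB *..1]-(m)-[r:CRAB|FISH *..1]-(s)
WITH project(p) AS f
RETURN f;
""")

# One row per operator: name, actual hits, relative time, absolute time.
for row in cursor.fetchall():
    print(row)
```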
However, when I actually execute and fetch from Python and mgconsole, it still takes 3-4 seconds to get the results. I tried copying the database to a Docker instance on my local machine, but there was no change.
So now I’m guessing that the huge time difference comes from serializing the results and sending them over the wire.
Hi @Canso, this is strange, and I’m not sure why it’s happening. If you still haven’t managed to resolve it and it’s a blocker for you, we can jump on a quick call to see what’s going on.