Task #11521 (closed)

Opened 6 years ago

Closed 3 years ago

dev_4_4 integration tests rather tiresome

Reported by: mtbcarroll
Owned by: jamoore
Priority: major
Milestone: Testing2
Component: General
Version: 4.4.8
Keywords: n.a.
Cc: omero-team@…
Resources: n.a.
Referenced By: n.a.
References: n.a.
Remaining Time: n.a.
Sprint: n.a.

Description

./build.py test-integration appears to run for hours, even on my shiny laptop running a local server. I'm fast approaching 300,000 lines of output now, with nothing in the past half hour but repetitions of:

2013-10-10 11:24:53,202 WARN  [me.services.blitz.fire.TopicManager$Impl] (r_Worker-0) Found no topic manager
2013-10-10 11:24:53,202 WARN  [me.services.blitz.fire.TopicManager$Impl] (r_Worker-0) No topic manager

Could the output somehow be trimmed, and real progress through the suite indicated more clearly? (How long does the full suite actually take? Is there a case for separating out the longer tests, to encourage people to actually bother running them?)

Change History (18)

comment:1 Changed 6 years ago by mtbcarroll

  • Milestone changed from Unscheduled to Testing2

comment:2 Changed 6 years ago by cblackburn

The full suite can take around two hours due to some long-running Python tests. These can be excluded via the build targets, as the tests have markers attached:

./build.py test-integration -DMARK="not long_running"

The documentation change exposing this is under review.
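For reference, marker-based exclusion of this kind can be sketched in pure Python. This is a simplified illustration of what pytest-style marker expressions do when the build forwards the MARK value to the test runner; the test names and the mark() helper here are made up, not OMERO code.

```python
# Simplified sketch of marker-based test selection, in the spirit of
# pytest's marker expressions. Names are illustrative, not OMERO code.

def mark(name):
    """Attach a marker name to a test function."""
    def decorator(fn):
        fn.marks = getattr(fn, "marks", set()) | {name}
        return fn
    return decorator

@mark("long_running")
def test_big_import():
    pass

def test_quick_query():
    pass

def select(tests, expression):
    """Support the two forms used in this ticket: 'name' and 'not name'."""
    negate = expression.startswith("not ")
    name = expression[4:] if negate else expression
    return [fn for fn in tests
            if (name in getattr(fn, "marks", set())) != negate]

# Excluding the long-running tests leaves only the quick ones:
for fn in select([test_big_import, test_quick_query], "not long_running"):
    print(fn.__name__)
```

The real selection logic lives in pytest's `-m` option; the point here is only that exclusion and inclusion are driven by the same attached marker.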

comment:3 Changed 6 years ago by jamoore

Colin: a late comment -- I wonder if MARK should be PYMARK, since it will have no effect on Java tests. The only way to make this work similarly for both would be via -DGROUP=short_running, with @Test(groups="short_running") or @pytest.mark.short_running on EVERYTHING in both. (Sadly.)

comment:4 Changed 6 years ago by cblackburn

I'm happy to change to PYMARK; it does make sense. Surely for Python we need only the current single marker, since:

./build.py test-integration -DMARK="not long_running"
./build.py test-integration -DMARK=long_running

are mutually exclusive.

Sorry: scratch that I see what you mean.

Last edited 6 years ago by cblackburn (previous) (diff)

comment:5 Changed 6 years ago by cblackburn

Do we want to hang anything else on this ticket, or is the existence and documentation of a mechanism to exclude long-running tests sufficient for this to be closed?

comment:6 Changed 6 years ago by mtbcarroll

Hmm, it depends whether, without the long-running tests, we actually get more obvious progress. More than half an hour of nothing but topic manager warnings is probably not a good sign if running the integration tests produces over a quarter of a million lines of output.

comment:7 Changed 6 years ago by cblackburn

Could you re-run the tests with the long-running tests excluded? If that reduces the time considerably, then this ticket might serve to address the logging, which I think arises in the Blitz integration tests. Locally I get this:

./build.py test-integration -DMARK="not long_running"

...

BUILD SUCCESSFUL
Total time: 35 minutes 46 seconds

and certainly not 250,000 lines of output.

Last edited 6 years ago by cblackburn (previous) (diff)

comment:8 Changed 6 years ago by mtbcarroll

Re-running the tests as suggested does indeed produce much of the unfortunate output. An expedient temporary fix would be to set the log output for Blitz integration tests to omit such messages from that class.

Better would be for the integration tests to pick up enough of the actual server configuration that bin/omero admin start uses, so that the topic manager can actually be found.
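The expedient fix mentioned above might look something like the following, assuming the test run uses a logback-style configuration (the logging framework and the full logger name are assumptions; the name is inferred from the truncated `me.services.blitz.fire.TopicManager$Impl` in the log excerpt and may need adjusting):

```xml
<!-- Hypothetical fragment for the test logging configuration:
     raise the threshold for the noisy logger so the repeated
     "no topic manager" WARN lines are dropped. Logger name is
     inferred from the truncated log output, not verified. -->
<logger name="ome.services.blitz.fire.TopicManager" level="ERROR"/>
```

This only hides the symptom, of course; the configuration fix described above is the real cure.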

comment:9 Changed 6 years ago by mtbcarroll

(Part of my concern is that, as we don't usually see these messages from a running server, I wonder if the Blitz integration tests exercise the server in an environment sufficiently dissimilar to a real deployment to raise doubts about the tests' validity.)

comment:10 Changed 6 years ago by mtbcarroll

Okay, I tried being more patient: after over fifteen hours on a fast machine with a local server, the integration tests are still running and I have well over a million topic-manager-related lines and they're still coming. Nobody's going to be running these tests while they're in this state.

comment:11 Changed 6 years ago by jamoore

@mtbc: thanks for the update, and I agree that the tests aren't usable in that state (and can be disabled for the moment). But as always, knowing what it was *doing* during that period would be invaluable. Is it doing nothing but sitting (i.e. shutdown didn't clean up a thread)? Is it a single test?

comment:12 Changed 6 years ago by mtbcarroll

I left dev_4_4's ./build.py test-integration running last night and it completed! I'll give it another go next week unless somebody else beats me to it.

comment:13 Changed 6 years ago by jamoore

Possibly use a testng listener which prints tests+times as executed so we could graph the progress?
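For illustration, the idea behind such a listener, sketched in Python rather than as an actual TestNG ITestListener; all names here are hypothetical, not OMERO's real hook:

```python
import time

class TimingListener:
    """Record each test's name and wall-clock duration as it runs,
    so progress through a long suite is visible. A sketch of the
    idea behind a TestNG listener, not OMERO's actual mechanism."""

    def __init__(self):
        self.results = []  # (test name, seconds) in execution order

    def run(self, tests):
        for fn in tests:
            start = time.perf_counter()
            fn()
            elapsed = time.perf_counter() - start
            self.results.append((fn.__name__, elapsed))
            # Print as we go: this is the "graph the progress" feed.
            print(f"{fn.__name__}: {elapsed:.3f}s")

def test_fast():
    pass

def test_slow():
    time.sleep(0.05)

listener = TimingListener()
listener.run([test_fast, test_slow])
```

In TestNG the equivalent would implement the listener interface's test-start/test-success callbacks and be registered on the suite; the per-test timings could then be plotted.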

comment:14 Changed 6 years ago by mtbcarroll

That might be nice; does anyone have any idea what I should put in where to make this happen?

comment:15 Changed 6 years ago by bpindelski

https://groups.google.com/forum/#!msg/testng-users/3rAt0p0Qo64/2XPA9_98LckJ has some interesting ideas. I don't think there is an existing listener implementation for time measuring, but writing one shouldn't be hard. I wonder if Bio-Formats already has a performance listener...

comment:16 Changed 6 years ago by mtbcarroll

Well, the last two times I ran them, they completed within ninety minutes. Perhaps something changed recently.

comment:17 Changed 3 years ago by sbesson

mtbcarroll: as you have been running the Integration tests extensively for the regions work, is this still valid or should we close?

comment:18 Changed 3 years ago by mtbcarroll

  • Resolution set to fixed
  • Status changed from new to closed

Perhaps thanks to work on,

"is there a case for separating out the longer tests"

I think we can probably close.
