User Story #2934 (closed)
Opened 14 years ago
Closed 13 years ago
Big: Compression
| Reported by: | jburel | Owned by: | |
| --- | --- | --- | --- |
| Priority: | critical | Milestone: | OMERO-Beta4.3 |
| Component: | General | Keywords: | n.a. |
| Cc: | dzmacdonald, jamoore, cxallan | Story Points: | n.a. |
| Sprint: | n.a. | Importance: | n.a. |
| Total Remaining Time: | n.a. | Estimated Remaining Time: | n.a. |
Description
Define and explore compression strategies for large images.
Change History (4)
comment:1 Changed 14 years ago by omero
comment:2 Changed 14 years ago by jburel
- Milestone changed from Unscheduled to OMERO-Beta4.3
comment:3 Changed 13 years ago by cxallan
- Summary changed from Big Images: Compression to Big: Compression
comment:4 Changed 13 years ago by cxallan
- Resolution set to duplicate
- Status changed from new to closed
Closed in favour of the more overarching #3278.
Sorry to chime in here; feel free to delete this comment if you don't find it useful. Compression is going to put a heavy load on the server, so does de-duplication look reasonable? With the ZFS filesystem you get block-level data deduplication natively. Illumos, Solaris, and Nexenta Core Platform ship with ZFS support, FreeBSD has it (though not dedup yet), and Linux is following.
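For anyone unfamiliar with the idea, below is a minimal Python sketch of block-level deduplication via content hashing, the mechanism ZFS dedup applies per on-disk block. The block size, function names, and in-memory dictionaries are illustrative assumptions for this sketch, not anything from OMERO or ZFS itself.

```python
import hashlib

# Illustrative block size; ZFS's default recordsize happens to be 128 KiB.
BLOCK_SIZE = 128 * 1024

def deduplicate(data: bytes):
    """Split data into fixed-size blocks and keep each unique block once,
    keyed by its content hash -- the same idea ZFS applies per block."""
    store = {}    # digest -> unique block bytes
    layout = []   # ordered digests that reconstruct the original input
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # store each block only once
        layout.append(digest)
    return store, layout

def reconstruct(store: dict, layout: list) -> bytes:
    """Rebuild the original byte stream from the block store and layout."""
    return b"".join(store[digest] for digest in layout)

# A repetitive input (e.g. blank regions of a large image) dedupes well:
data = b"\x00" * BLOCK_SIZE * 8 + b"tissue" * 1000
store, layout = deduplicate(data)
assert reconstruct(store, layout) == data
print(f"{len(layout)} logical blocks, {len(store)} stored")
```

Repeated byte-identical blocks then cost one stored copy plus a reference each, which is why dedup can pay off on repetitive imagery; the trade-off is that the hash table itself consumes memory, a well-known cost of ZFS dedup.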