Research Platform Services Wiki

Service Functionality

The current UoM service offering utilises some of Mediaflux's capabilities (functionality will grow over time). UoM currently offers a turn-key project capability:

  • Projects
    • A project namespace (think of it as a cloud-based file system) where you and your team can securely store and share your data collection
    • You can structure your project namespace however you wish
    • You can associate discoverable meta-data with the structure and with the files that you upload
  • Authentication
    • You can log in with your institutional credentials or with a local account
    • For University of Melbourne staff and students, you can log in directly with your institutional credential
    • For users from other institutions that are members of the Australian Access Federation (AAF), you can log in with your institutional credential via the AAF
    • For other users, you can log in with a local account created for you
  • Authorisation
    • Whichever account you log in with, it must be granted roles (done by the Mediaflux support team) before it is authorised to access resources
    • Standard roles are created per project (admin, read/create/destroy, read/create, read) and can be assigned to project team members
  • Data Movement
  • Access Protocols
    • Projects can be accessed via
      • HTTPS (Browser-based access and various Java clients)
      • SMB (i.e. a file share)
      • sFTP (e.g. FileZilla, CyberDuck)
      • NFS (only with discussion with ResPlat Data Team)
  • Encryption (discuss with ResPlat Data Team)
    • HTTPS protocol only: files can be encrypted at the storage layer (protection against unauthorised access to the system back end only). Other protocols will be supported in the future.
    • Selected meta-data can be encrypted (again, protection against unauthorised access to the system back end only)
  • Scalability
    • The primary system consists of a controller node (which handles database transactions) and two IO nodes that move data to and from the storage. More IO nodes can be added as needed.
    • The underlying storage is provided via a highly scalable CEPH cluster. More nodes can be added to the cluster as needed.
    • The combination of the scalable Mediaflux cluster and scalable CEPH cluster provides a very extensible environment as our data movement needs grow
  • Redundancy
    • The primary controller server is part of a High Availability pair. If one fails, the service can be moved to the other.
    • The database on the primary controller server is backed up every three hours. If the database becomes corrupted, data that arrived in the up-to-three-hour window since the last backup may no longer be known to the restored database.
    • Your data are replicated to another Mediaflux (DR) server in a separate data centre (Noble Park)
      • Data that have been destroyed on the primary server may be retrievable from the DR server (an administration task)
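
The idea of a project namespace with discoverable meta-data attached to files can be sketched as a toy model; the namespace paths, meta-data field names, and the `find_files` helper below are invented for illustration only and are not Mediaflux's actual schema or API:

```python
# Toy model of a project namespace: file paths mapped to meta-data.
# All paths and fields here are hypothetical, not Mediaflux's schema.
project_files = {
    "/projects/demo/raw/sample_001.dat": {"instrument": "scanner-A", "year": 2019},
    "/projects/demo/raw/sample_002.dat": {"instrument": "scanner-B", "year": 2019},
    "/projects/demo/docs/readme.txt":    {"type": "documentation"},
}

def find_files(files, key, value):
    """Discovery: return the paths whose meta-data matches key == value."""
    return sorted(path for path, md in files.items() if md.get(key) == value)

print(find_files(project_files, "instrument", "scanner-A"))
# ['/projects/demo/raw/sample_001.dat']
```

The point is that meta-data is attached per file (and can also be attached to the structure itself), so team members can discover data by querying attributes rather than remembering paths.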
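
The four standard per-project roles (admin, read/create/destroy, read/create, read) amount to a simple permission table. The sketch below is a hypothetical illustration of how such role-based checks behave; the role keys, permission names, and `is_authorised` helper are assumptions, not Mediaflux's actual authorisation implementation:

```python
# Permissions implied by each standard per-project role, most to least
# privileged. Role and permission names are illustrative assumptions.
ROLE_PERMISSIONS = {
    "admin": {"read", "create", "destroy", "administer"},
    "read-create-destroy": {"read", "create", "destroy"},
    "read-create": {"read", "create"},
    "read": {"read"},
}

def is_authorised(role, action):
    """Return True if an account granted `role` may perform `action`."""
    return action in ROLE_PERMISSIONS.get(role, set())

# A team member holding read/create can upload new files but not delete:
print(is_authorised("read-create", "create"))   # True
print(is_authorised("read-create", "destroy"))  # False
```

This is why roles must be granted by the Mediaflux support team before access works: an account with no role maps to an empty permission set and every check fails.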
data_management/mediaflux/vicnode_mf.txt · Last modified: 2019/06/20 14:32 by Neil Killeen