hximg - nmaekawa/hx-provision GitHub Wiki

hximg is a IIIF backend service. This document details how to use hx-provision to provision and configure the many components involved. The setup presented is not intended to be comprehensive: there are many ways to configure a IIIF infrastructure, and this is the one used at hx.

The following directions pick up from the setup in Home.

disclaimer

For demo purposes only! This is provided to show how to set up an hximg vagrant installation; support for this repo is OUT-OF-SCOPE at this time.

vagrant instances

BE WARNED: this vagrant setup requires 7 (seven) Ubuntu instances. This is to maintain parity with the current production environment.

  • loris.vm, the hx iiif loris image server
  • images.vm, a reverse proxy in front of hx loris
  • hxvarnish.vm, a dedicated varnish for hx loris images
  • ids.vm, another loris image server to mock the libraries server
  • idsvarnish.vm, a dedicated varnish for ids images
  • manifests.vm, the iiif manifest server
  • mirador.vm, a mirador (IIIF image viewer) LTI provider

So, for the hx image service there is a loris server (loris.vm), a varnish cache in front of it (hxvarnish.vm), and a reverse proxy in front of hxvarnish (images.vm) to handle SSL in the production environment.
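As a rough sketch of that front layer (hypothetical, not the config the playbooks actually generate), an nginx server block along these lines would forward IIIF-shaped requests to the varnish cache and reject everything else; hostnames and ports here are assumptions:

```nginx
# Hypothetical sketch of the images.vm reverse proxy.
server {
    listen 80;
    server_name images.vm;

    # only IIIF-shaped requests are forwarded to the varnish cache
    location /iiif/ {
        proxy_pass http://hxvarnish.vm:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # everything else is rejected
    location / {
        return 404;
    }
}
```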

manifests.vm runs hxprezi as the IIIF manifest server.

The mirador LTI provider, hxmirador, serves a mirador instance via the LTI protocol.

idsvarnish.vm uses the mock ids.vm as varnish backend.
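A minimal sketch of what that backend declaration looks like in VCL, assuming varnish 4+ and ids.vm listening on port 80 (the VCL actually laid down by the provisioning role is more involved):

```vcl
# Hypothetical minimal VCL sketch for idsvarnish.vm.
vcl 4.0;

backend ids {
    .host = "ids.vm";
    .port = "80";
}

sub vcl_recv {
    # send every request to the mock libraries server
    set req.backend_hint = ids;
}
```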

The command below starts all the instances for hximg:

(venv) $> vagrant up images loris ids hxprezi mirador idsvarnish hxvarnish

Log in to each box as shown below, so that the SSH host key fingerprint is stored in ~/.ssh/known_hosts. This helps ansible-playbook when installing hximg:

(venv) $> ssh [email protected] -i ~/.vagrant.d/insecure_private_key
images $> exit
...
(venv) $> ssh [email protected] -i ~/.vagrant.d/insecure_private_key
hxprezi $> exit
...
(venv) $> ssh [email protected] -i ~/.vagrant.d/insecure_private_key
ids $> exit
...
(venv) $> ssh [email protected] -i ~/.vagrant.d/insecure_private_key
mirador $> exit
...
(venv) $> ssh [email protected] -i ~/.vagrant.d/insecure_private_key
idsvarnish $> exit
...
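Alternatively, assuming the .vm hostnames resolve from your host machine, the host keys can be collected in one pass with ssh-keyscan (adjust the hostname list to match your setup):

```shell
# append the host key of each box to ~/.ssh/known_hosts in one pass
for host in images loris ids hxprezi hxmirador idsvarnish hxvarnish; do
  ssh-keyscan -H "${host}.vm" >> ~/.ssh/known_hosts
done
```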

about sample images

You will have to provide some images for this install to work. Say you have a bunch of JPEGs to use as samples. This will work with JPEGs and non-pyramidal TIFFs, but the file extensions have to be .jpg and .tif respectively (a loris requirement). The provision playbooks expect the images to be in a .tar.gz file; below is an example of gathering your sample images. Note that the subdirs for hx images and libraries images are relevant, as the proxy will filter out requests that don't follow this pattern.

# say some images are in /tmp/iiif
(venv) $> cd /tmp
(venv) $> tar cvzf /tmp/images.tar.gz ./iiif

# and to fake the libraries server, some other images are in /tmp/ids/iiif
(venv) $> tar cvzf /tmp/other_images.tar.gz ./ids

# set the path for hx images in the playbook
(venv) $> vi hximg-provision/loris_play.yml
...
   - name: setup loris
     include_role:
       name: hx.loris
     vars:
       local_image_sample_path_tar_gz: '/tmp/images.tar.gz'

# set the path for fake libraries server in the playbook
(venv) $> vi hximg-provision/ids_play.yml
...
   - name: setup ids loris
     include_role:
       name: hx.loris
     vars:
       local_image_sample_path_tar_gz: '/tmp/other_images.tar.gz'
...

about sample manifests

You will have to provide some manifests that point to the above-mentioned sample images as well. The provision playbooks expect the manifests to be in a .tar.gz file; below is an example of gathering sample manifests. Again, subdirs are relevant, since hxprezi is rigid about the format of a manifest id and quite coupled to the HarvardX way of defining manifests. Refer to the hxprezi repo for details.

# say some manifests (that reference your sample images) are in /tmp/hx
(venv) $> ls /tmp/hx
cellx:123456.json
...
(venv) $> cd /tmp
(venv) $> tar cvzf manifests.tar.gz ./hx

# then set the path for hx manifests in the playbook
(venv) $> vi hximg-provision/hxprezi_play.yml
...
   - hosts: '{{ target_hosts | default("tag_service_hxprezi", true) }}'
     remote_user: "{{ my_remote_user }}"
     become: yes
     vars:
       local_manifests_path_tar_gz: /tmp/manifests.tar.gz
...
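For reference, a minimal IIIF Presentation 2.x manifest skeleton for the cellx:123456 example might look roughly like the sketch below. The hostnames, ids, and dimensions are illustrative assumptions; they must match your image server URLs and hxprezi's id conventions:

```json
{
  "@context": "http://iiif.io/api/presentation/2/context.json",
  "@id": "http://manifests.vm/manifests/cellx:123456",
  "@type": "sc:Manifest",
  "label": "Sample manifest",
  "sequences": [
    {
      "@type": "sc:Sequence",
      "canvases": [
        {
          "@id": "http://manifests.vm/manifests/cellx:123456/canvas/1",
          "@type": "sc:Canvas",
          "label": "1",
          "width": 1024,
          "height": 768,
          "images": [
            {
              "@type": "oa:Annotation",
              "motivation": "sc:painting",
              "on": "http://manifests.vm/manifests/cellx:123456/canvas/1",
              "resource": {
                "@id": "http://images.vm/iiif/sample.jpg/full/full/0/default.jpg",
                "@type": "dctypes:Image",
                "service": {
                  "@context": "http://iiif.io/api/image/2/context.json",
                  "@id": "http://images.vm/iiif/sample.jpg",
                  "profile": "http://iiif.io/api/image/2/level2.json"
                }
              }
            }
          ]
        }
      ]
    }
  ]
}
```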

provision the instances

Because this involves all these vms, it's easier to do it piecemeal. I had problems accessing github.com (of all things) that made the playbooks fail right at the beginning of the process... so, again, be warned: do not expect hximg_play.yml to work with vagrant.

Run:

# run the common_play.yml for each instance, preferably 2 at a time
(venv) $> ansible-playbook -i hosts/vagrant_hximg.ini common_play.yml --extra-vars target_hosts=hxmirador.vm,hxprezi.vm
...

(venv) $> ansible-playbook -i hosts/vagrant_hximg.ini common_play.yml --extra-vars target_hosts=loris.vm,images.vm
...
# ... and so forth


 # then provision each service; order matters
(venv) $> ansible-playbook -i hosts/vagrant_hximg.ini ids_play.yml

(venv) $> ansible-playbook -i hosts/vagrant_hximg.ini hxmirador_play.yml
(venv) $> ansible-playbook -i hosts/vagrant_hximg.ini hxprezi_play.yml

(venv) $> ansible-playbook -i hosts/vagrant_hximg.ini images_loris_play.yml
(venv) $> ansible-playbook -i hosts/vagrant_hximg.ini images_varnish_play.yml
(venv) $> ansible-playbook -i hosts/vagrant_hximg.ini images_reverseproxy_play.yml

If all goes well, you should be able to see images in your browser by hitting the URL for the hx images server:

http://images.vm/iiif/<your_sample_image.jpg>/full/128,/0/default.jpg

or the fake libraries server, via the varnish cache:

http://idsvarnish.vm/ids/iiif/<your_sample_image.jpg>/full/128,/0/default.jpg
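These URLs can also be smoke-tested from the command line, for example with curl (the filename is a placeholder for one of your sample images):

```shell
# expect HTTP 200 and an image/jpeg Content-Type in the response headers
curl -sI "http://images.vm/iiif/<your_sample_image.jpg>/full/128,/0/default.jpg"
```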

You can then go to

http://projectmirador.org/demo

and replace an object with your local manifest:

http://manifest.vm/manifests/<source>:<manifest_id>

for the example above:

http://manifest.vm/manifests/cellx:123456

Note that this author has not (yet) found an easy way to integrate the mirador LTI provider in a local environment. You can try using the edX devstack container as the LTI consumer, but you will need to tweak the networking configs to get edX in docker talking to the mirador provider in vagrant. If you figure this out, let me know!

---eop
