
Content Search UI recommendations #9

Open · tomcrane opened this issue Feb 5, 2018 · 4 comments

tomcrane commented Feb 5, 2018

See issue:

UniversalViewer/universalviewer#529

tomcrane commented

(The above issue has been superseded by UniversalViewer/universalviewer#548.)

The original covered some UI changes to zoom behaviour that could be made prior to UVCON. Issue 548 carries the remaining (and significant) UI changes.

tomcrane commented

I have prepared some comments; they are in the "Search" section of this document:

https://docs.google.com/document/d/1YJGne0JK4t_5ygC7wHvZrngw2TjTKHt8-Iq__L4Xklo/edit#heading=h.phxwno6lkxjv

tomcrane commented

What is the relationship between this user story, UniversalViewer/universalviewer#548, and user story #4?

Current state of play:
https://gist.github.com/tomcrane/8ca89f971d6571acab1016ba34c9dc85

Putting the mechanism of accessing Search (panel, popup, etc.) to one side for a moment, what does this icon do?

[image: the icon under discussion]

I think it reveals the non-painting content of the object, and the means to interact with it, which very often involves a search box. Search is one way of interacting with the non-painting content; it's the only way the UV has at the moment, but in future the UV could have other ways, such as:

  1. viewing the running transcript of AV in real time
  2. the text content of canvases, for reading and for copying OCR or transcription
  3. possible future complex interactions with the textual content of canvases, like OCR (universalviewer#424)

The first phase is to make search happen here, because this is where it belongs, and it is a rearrangement of existing UV functionality. It's only available when a search service is available. But that 'panel' becomes the home for other textual-content interactions as well, which aren't separable from search. So later it may be available whenever textual content is available, even if no formal IIIF search service is; you could still have an interaction that searches whatever text the UV can see directly.
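A minimal sketch of that availability check, assuming a IIIF Presentation 2.x manifest and deliberately simplified shapes (this is not the UV's actual code; `findSearchService` and the interfaces are hypothetical):

```typescript
// Find a IIIF Content Search service on a Presentation 2.x manifest,
// so the search box is only shown when one exists.
interface IIIFService {
  "@id": string;
  profile?: string | string[];
}

interface ManifestLike {
  service?: IIIFService | IIIFService[];
}

function findSearchService(manifest: ManifestLike): IIIFService | undefined {
  // `service` may be a single object or an array; normalise to an array.
  const services = ([] as IIIFService[]).concat(manifest.service ?? []);
  return services.find((s) =>
    ([] as string[])
      .concat(s.profile ?? [])
      .some((p) => p.includes("iiif.io/api/search"))
  );
}

// Usage: show the search box only if the manifest advertises a service.
const manifest: ManifestLike = {
  service: {
    "@id": "https://example.org/iiif/book1/search",
    profile: "http://iiif.io/api/search/0/search",
  },
};
const showSearchBox = findSearchService(manifest) !== undefined;
console.log(showSearchBox); // true: this manifest advertises a search service
```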

tomcrane commented

Addendum to the above comment. For running transcripts, you might wish to configure the UV to have the "text" content visible by default (that panel open, that dialogue visible; TBC), which is a requirement for this:

https://user-images.githubusercontent.com/1443575/44481517-de718e80-a63d-11e8-85df-87773cb12079.PNG

(the UV needs to provide similar functionality)
Note that "Filter" in the image above is search, in the sense that it's likely to be hitting a IIIF search service.
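For illustration, a hedged sketch of what such a "Filter" interaction might do against a IIIF Content Search (v0/v1) service: a GET of `{service}?q=term` returns an AnnotationList whose `resources` are text annotations targeting canvas regions. The endpoint below is a placeholder and the response shape is simplified from the spec:

```typescript
// Query a IIIF Content Search service and return its hit annotations.
interface SearchHit {
  "@id": string;
  on: string; // e.g. "https://example.org/canvas/c1#xywh=100,200,50,20"
  resource: { "@type": string; chars: string };
}

async function searchWithin(serviceId: string, query: string): Promise<SearchHit[]> {
  const res = await fetch(`${serviceId}?q=${encodeURIComponent(query)}`);
  const list = await res.json();
  // The spec's AnnotationList carries hits in `resources`.
  return (list.resources ?? []) as SearchHit[];
}

// Usage (hypothetical endpoint):
// searchWithin("https://example.org/iiif/book1/search", "medical officer")
//   .then((hits) => hits.forEach((h) => console.log(h.resource.chars, h.on)));
```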

By extension, and if that facility arrives later, you might want the UV configured to present the textual panel open by default for image-based content too, so a user can see the text of a page of a book and easily select that text. The kind of thing that happens outside of the UV here:

https://wellcomelibrary.org/moh/report/b18251341/2

(that is just a rendering of the anno list that the UV has access to from the manifest)

My issue: in this UI (and it's not necessarily the UI we have to adopt) the textual content display occupies the same part of the UI as search, and the presentation of search results shares a great deal with the presentation of the content that can be searched. After all, that content and search results are identical things as far as the UV is concerned: they are both lists of textual annotations.

In some contexts they are very clearly the same thing to the user too, where whole annotations are returned as search results. Where the search service generates dynamic annotations for search results on the fly, it's not so clear (the differences between NLW search and Wellcome search, for example). But the UV can't tell how the publisher generates search results from queries, or how big the annotation result text is likely to be (from a single word to an entire page transcript in one annotation).
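To make that concrete, here is an illustrative (not actual-UV) reduction of both kinds of list to one shape the panel could render, assuming string `on` targets in the IIIF Presentation 2.x / Search v1 style:

```typescript
// Whether the annotations come from a canvas's annotation list or from a
// Content Search response, the panel can reduce both to the same entries.
interface TextAnnotation {
  on: string; // canvas URI, optionally with a #xywh= fragment
  resource: { chars: string };
}

interface PanelEntry {
  canvasId: string;
  region?: string; // xywh selector, present when a fragment is targeted
  text: string;    // anything from a single word to a whole page transcript
}

function toPanelEntries(annotations: TextAnnotation[]): PanelEntry[] {
  return annotations.map((anno) => {
    const [canvasId, fragment] = anno.on.split("#");
    return {
      canvasId,
      region: fragment?.replace(/^xywh=/, ""),
      text: anno.resource.chars,
    };
  });
}
```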

How important is consistent treatment of presentation of text, and search within the textual content, for image based media (where the text is a transcription from an image) and time-based media (where the text is a transcription of the audio or video*)?

Should these all be aspects of the same part of the viewer, as in my comment above?

I feel that there should be a common search treatment for time-based media and spatial media, and that if the UV is to show the textual content of the object (#4) then that is somehow tied to search, which is search of textual content. That means there is a link between how the UV displays the running transcript of a radio or TV news broadcast and how you search a digitised book. There is a common thread of UI; they feel part of the same cluster of actions.

*There is another possibility here that we can maybe ignore for now... transcription of spatially targeted text in video; that is, text visible in a scene rather than audible.
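One way to picture that common thread (a hypothetical sketch, not a proposal for the UV's internals): the same annotation model serves both kinds of media, distinguished only by the target's W3C Media Fragment, `#xywh=` for page regions and `#t=` for time ranges. The routing logic below is an assumption, and its regexes are simplified (they ignore, say, `t=start` with no end):

```typescript
// Route one annotation model to two views based on the target fragment.
type Target =
  | { kind: "spatial"; canvasId: string; xywh: string }                  // page region
  | { kind: "temporal"; canvasId: string; start: number; end: number }; // AV time range

function parseTarget(on: string): Target {
  const [canvasId, fragment = ""] = on.split("#");
  const t = fragment.match(/^t=([\d.]+),([\d.]+)$/);
  if (t) {
    return { kind: "temporal", canvasId, start: Number(t[1]), end: Number(t[2]) };
  }
  const xywh = fragment.match(/^xywh=(.+)$/);
  return { kind: "spatial", canvasId, xywh: xywh ? xywh[1] : "full" };
}

// A transcript view follows temporal targets in real time; a page-text view
// highlights spatial targets; search results could drive either one.
console.log(parseTarget("https://example.org/canvas/c1#t=12.5,14.0"));
console.log(parseTarget("https://example.org/canvas/c1#xywh=100,200,50,20"));
```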
