Develop Benchmark User Journey #11
Two phases perhaps:
This second question, which I think is the actual "benchmark user journey" this ticket represents, is likely a deliverable at the end of Immersion, after three weeks of research.
Related to user journey mapping, I think there is also a question of how to do a "SonoEval" — we need a list of questions that we can automatically evaluate, and each question would have associated acceptance criteria (a rough sketch of what one such question might look like is below). So this could be in phases too:
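To make the "SonoEval" idea a bit more concrete, here is a minimal sketch of how a question set with automatically checkable acceptance criteria might be represented. All of the names here (`EvalQuestion`, `AcceptanceCriterion`, `evaluate`) and the example question are hypothetical, just to illustrate the shape of the data, not an agreed design.

```python
# Hypothetical sketch of a "SonoEval" question set: each question carries
# acceptance criteria that can be checked automatically against an answer.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class AcceptanceCriterion:
    description: str
    check: Callable[[str], bool]  # returns True if the answer satisfies this criterion


@dataclass
class EvalQuestion:
    question: str
    criteria: List[AcceptanceCriterion] = field(default_factory=list)

    def evaluate(self, answer: str) -> dict:
        """Check the answer against every criterion and report pass/fail per criterion."""
        results = {c.description: c.check(answer) for c in self.criteria}
        return {
            "question": self.question,
            "passed": all(results.values()),
            "criteria": results,
        }


if __name__ == "__main__":
    # Purely illustrative example question, not drawn from any real corpus.
    q = EvalQuestion(
        question="Which documents in the corpus discuss sonar calibration?",
        criteria=[
            AcceptanceCriterion("mentions at least one document id",
                                lambda ans: "doc-" in ans),
            AcceptanceCriterion("answer is non-empty",
                                lambda ans: bool(ans.strip())),
        ],
    )
    print(q.evaluate("See doc-42 and doc-77 for calibration procedures."))
```

A phased rollout could then just grow the question list and tighten the criteria over time, without changing how the evaluation itself is run.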
Could also just ask to inspect the final deliverable from the consulting company.
I'm interested in documenting the catalog of questions a team of researchers might potentially ask of any corpus. It's clearly a spectrum. It's also partially dependent on the state of the art today (single researchers, scanning, current IR) versus what might be possible with a new product, e.g. teams of experts actively creating collaborative inquiry experiences together that might themselves create new incremental knowledge artifacts, which could then be used as inputs into generative-powered exploration and creation. I'm less interested in making a slightly more clever IR engine than in looking towards creating something that's fundamentally different and better than generative-powered search. It might be a while until we can fully realize some of this, but I don't want to lose the longer-term disruptive vision we might build towards.
Request from Paul