changed sbt URL
Updated Quickstart (markdown)
Updated the tokenize examples in the documentation: use sentences.map(PTBTokenizer), not sentences.map(PTBTokenizer()), and add import breeze.text.analyze._
(Pressed enter too early; this is the other change needed to get this working.)
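For context, a minimal sketch of what the corrected usage might look like, assuming the pre-split breeze.text API these notes refer to. Only PTBTokenizer, the sentences.map(PTBTokenizer) form, and the breeze.text.analyze._ import come from the notes above; the breeze.text.tokenize import path and PorterStemmer are assumptions, not confirmed API.

```scala
// Sketch only: assumes the old breeze.text API referenced in these change notes.
import breeze.text.tokenize.PTBTokenizer // assumed package for PTBTokenizer
import breeze.text.analyze._             // import named in the change note above

object QuickstartTokenizeSketch extends App {
  // Stand-in sentences; the actual Quickstart segments raw text into sentences first.
  val sentences = IndexedSeq(
    "Breeze is a library for numerical processing.",
    "It once shipped basic text utilities as well."
  )

  // Map the tokenizer object itself over the sentences:
  // sentences.map(PTBTokenizer), not sentences.map(PTBTokenizer()).
  val tokenized = sentences.map(PTBTokenizer)

  // Stem each token; PorterStemmer is assumed to live in breeze.text.analyze
  // and to be usable as a String => String function.
  val stemmer = new PorterStemmer
  val stemmed = tokenized.map(_.map(stemmer))

  stemmed.foreach(println)
}
```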
I had to make the following changes in order to get the sentence parsing and stemming to work.
Given the following code, I believe this should be 100k and not 1 million
Created Quickstart (markdown)