Jacob Perkins

Python 3 Text Processing with NLTK 3 Cookbook

  • niodeya quoted 4 years ago (illustrated in the first sketch after this list)
    Most of the time, the default sentence tokenizer will be sufficient
  • niodeya quoted 4 years ago
    Once you have a custom sentence tokenizer, you can use it for your own corpora
  • niodeya quoted 4 years ago (illustrated in the second sketch after this list)
    The PunktSentenceTokenizer class uses an unsupervised learning algorithm to learn what constitutes a sentence break. It is unsupervised because you don't have to give it any labeled training data, just raw text
  • niodeya quoted 4 years ago
    This difference is a good demonstration of why it can be useful to train your own sentence tokenizer, especially when your text isn't in the typical paragraph-sentence structure
  • niodeya quoted 4 years ago
    tokenizer. You can get raw text either by reading in a file, or from an NLTK corpus using the raw() method
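The first highlight refers to NLTK's default sentence tokenizer, exposed as sent_tokenize. A minimal sketch of what "sufficient most of the time" looks like in practice (the sample paragraph is illustrative; the punkt model data must be downloaded first):

```python
import nltk
from nltk.tokenize import sent_tokenize

# One-time download of the pre-trained Punkt model used by sent_tokenize.
nltk.download('punkt')

# For ordinary, well-formed prose the default tokenizer splits
# sentences correctly out of the box.
para = "Hello World. It's good to see you. Thanks for buying this book."
print(sent_tokenize(para))
# ['Hello World.', "It's good to see you.", 'Thanks for buying this book.']
```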
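The remaining highlights describe training a custom PunktSentenceTokenizer on raw text and then applying it to your own corpora. A sketch of that workflow, assuming the NLTK webtext corpus data has been downloaded; the corpus file and the commented-out local file path are illustrative choices, not mandated by the quotes:

```python
import nltk
from nltk.tokenize import PunktSentenceTokenizer
from nltk.corpus import webtext

nltk.download('webtext')

# Raw text can come from an NLTK corpus via its raw() method ...
text = webtext.raw('overheard.txt')

# ... or from any plain-text file (hypothetical path):
# with open('my_corpus.txt', encoding='utf-8') as f:
#     text = f.read()

# Training is unsupervised: the constructor learns sentence-break
# conventions (abbreviations, sentence starters, collocations)
# from the raw text alone, with no labeled data.
sent_tokenizer = PunktSentenceTokenizer(text)

# The trained tokenizer can now be reused on text with the same
# conventions, which helps when it isn't typical paragraph-sentence prose.
sentences = sent_tokenizer.tokenize(text)
print(sentences[0])
```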