Selected article for: "large number and machine learning"

Author: Hull, Isaiah
Title: Dimensionality Reduction
  • Cord-id: ffj7nlhh
  • Document date: 2020_11_26
    Snippet: Many problem classes in machine learning are inherently high-dimensional. Natural language processing problems, for instance, often involve the extraction of meaning from words, which can appear in an intractably large number of potential sequences. In this chapter, we will discuss how principal component analysis (PCA), partial least squares (PLS), and autoencoder models can be used to reduce the dimensionality of such problems, rendering them tractable.
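
    As a rough illustration of the kind of dimensionality reduction the chapter covers, here is a minimal PCA sketch using scikit-learn and synthetic data. This is not code from the chapter; the data-generating setup and parameter choices below are illustrative assumptions.

    ```python
    # Minimal PCA sketch (assumed setup, not the chapter's own code):
    # synthetic 50-dimensional data driven by 3 latent factors,
    # reduced back to 3 principal components.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 3))    # 3 underlying factors
    loadings = rng.normal(size=(3, 50))   # map factors to 50 observed features
    X = latent @ loadings + 0.1 * rng.normal(size=(200, 50))

    # Project the 50 features onto 3 principal components.
    pca = PCA(n_components=3)
    X_reduced = pca.fit_transform(X)

    print(X_reduced.shape)                      # (200, 3)
    print(pca.explained_variance_ratio_.sum())  # close to 1.0 for this data
    ```

    In practice, the number of components is often chosen by inspecting the cumulative explained variance ratio rather than fixed in advance, as assumed here.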
