Search Results (2)
  • Open Access

    ARTICLE

    Predicting Carpark Prices Indices in Hong Kong Using AutoML

    Rita Yi Man Li, Lingxi Song, Bo Li, M. James C. Crabbe, Xiao-Guang Yue*

    CMES-Computer Modeling in Engineering & Sciences, Vol.134, No.3, pp. 2247-2282, 2023, DOI:10.32604/cmes.2022.020930 - 20 September 2022

    Abstract The aims of this study were threefold: 1) study the research gap in carpark and price index research via big data and natural language processing, 2) examine the research gap of carpark indices, and 3) construct carpark price indices via repeat sales methods and predict carpark indices via AutoML. By searching the keyword “carpark” in Google Scholar, the largest electronic academic database, which covers Web of Science and Scopus indexed articles, this study obtained 999 articles and book chapters from 1910 to 2019. It confirmed that most carpark research threw light on multi-storey carparks, management…

  • Open Access

    ARTICLE

    Language-Independent Text Tokenization Using Unsupervised Deep Learning

    Hanan A. Hosni Mahmoud, Alaaeldin M. Hafez, Eatedal Alabdulkreem*

    Intelligent Automation & Soft Computing, Vol.35, No.1, pp. 321-334, 2023, DOI:10.32604/iasc.2023.026235 - 06 June 2022

    Abstract Language-independent text tokenization can aid in the classification of languages with few resources. There is a global research effort to generate text classification for any language. Human text classification is a slow procedure; consequently, text summary generation for different languages using machine text classification has been considered in recent years. There is no research on machine text classification for many languages such as Czech, Rome, and Urdu. This research proposes a cross-language text tokenization model using a Transformer technique. The proposed Transformer employs an encoder that has ten layers with self-attention encoding and a feedforward…
