Search Results (2)
  • Open Access

    ARTICLE

    An Automatic Threshold Selection Using ALO for Healthcare Duplicate Record Detection with Reciprocal Neuro-Fuzzy Inference System

    Ala Saleh Alluhaidan*, Pushparaj, Anitha Subbappa, Ved Prakash Mishra, P. V. Chandrika, Anurika Vaish, Sarthak Sengupta

    CMC-Computers, Materials & Continua, Vol.74, No.3, pp. 5821-5836, 2023, DOI:10.32604/cmc.2023.033995 - 28 December 2022

    Abstract: Systems based on EHRs (Electronic Health Records) have been in use for many years, and their amplified adoption has been felt recently. They remain pioneering collections of massive volumes of health data. Duplicate detection involves discovering records that refer to the same real-world entities, a task that generally depends on several input parameters supplied by experts. Record linkage is the problem of finding matching records across different data sources. The similarity between two records is characterized by domain-based similarity functions over different features. De-duplication of one dataset or the linkage of…
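
    The abstract describes scoring record pairs with domain-based similarity functions over individual fields and comparing the combined score against a threshold. The short Python sketch below illustrates only that generic step, with hypothetical field names, similarity functions, and weights; the ALO-based threshold selection and the reciprocal neuro-fuzzy inference system contributed by the paper are not reproduced here.

        # A minimal sketch (not the paper's method): per-field, domain-based
        # similarity functions combined into a weighted record-pair score that is
        # compared against a fixed threshold. The ALO-selected threshold and the
        # reciprocal neuro-fuzzy inference step from the abstract are omitted.
        from difflib import SequenceMatcher

        def name_sim(a: str, b: str) -> float:
            """Character-level similarity for free-text fields such as names."""
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        def exact_sim(a: str, b: str) -> float:
            """Binary similarity for structured fields such as dates or IDs."""
            return 1.0 if a == b else 0.0

        # Hypothetical fields and weights; in practice these are expert-supplied inputs.
        FIELDS = [("name", name_sim, 0.5), ("dob", exact_sim, 0.3), ("zip", exact_sim, 0.2)]

        def record_similarity(r1: dict, r2: dict) -> float:
            return sum(w * sim(r1[k], r2[k]) for k, sim, w in FIELDS)

        def is_duplicate(r1: dict, r2: dict, threshold: float = 0.85) -> bool:
            # A fixed threshold stands in for the automatically selected one.
            return record_similarity(r1, r2) >= threshold

        if __name__ == "__main__":
            a = {"name": "Jon Smith", "dob": "1980-02-01", "zip": "10001"}
            b = {"name": "John Smith", "dob": "1980-02-01", "zip": "10001"}
            print(round(record_similarity(a, b), 3), is_duplicate(a, b))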

  • Open Access

    ARTICLE

    Random Forests Algorithm Based Duplicate Detection in On-Site Programming Big Data Environment

    Qianqian Li, Meng Li, Lei Guo*, Zhen Zhang

    Journal of Information Hiding and Privacy Protection, Vol.2, No.4, pp. 199-205, 2020, DOI:10.32604/jihpp.2020.016299 - 07 January 2021

    Abstract: On-site programming big data refers to the massive data generated during software development, characterized by real-time generation, complexity, and high processing difficulty. Data cleaning is therefore essential for on-site programming big data. Duplicate data detection is an important step in data cleaning that can save storage resources and enhance data consistency. To address the insufficiency of the traditional Sorted Neighborhood Method (SNM) and the difficulty of detecting duplicates in high-dimensional data, an optimized algorithm based on random forests with a dynamic, adaptive window size is proposed. The efficiency of the algorithm can be…
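
    The abstract outlines an SNM variant with a dynamic, adaptive window and a random-forest matcher. The sketch below shows only the classic SNM skeleton with a simple grow/shrink window heuristic; the blocking key, the adaptation rule, and the pairwise matcher are placeholders rather than the paper's actual design.

        # Minimal sketch of the classic Sorted Neighborhood Method (SNM) with a
        # simple adaptive window, assuming a pluggable pairwise matcher. The
        # random-forest classifier and the paper's exact window-adaptation rule
        # are not shown; the grow/shrink heuristic below is an assumption.
        from typing import Callable, Dict, List, Tuple

        Record = Dict[str, str]

        def snm_adaptive(records: List[Record],
                         key: Callable[[Record], str],
                         is_match: Callable[[Record, Record], bool],
                         min_w: int = 3, max_w: int = 10) -> List[Tuple[int, int]]:
            """Sort by a blocking key, then compare records inside a sliding window
            whose size grows when matches are found and shrinks when they are not."""
            order = sorted(range(len(records)), key=lambda i: key(records[i]))
            pairs, w = [], min_w
            for pos in range(len(order)):
                found = False
                for offset in range(1, w):
                    if pos + offset >= len(order):
                        break
                    i, j = order[pos], order[pos + offset]
                    if is_match(records[i], records[j]):
                        pairs.append((i, j))
                        found = True
                # Heuristic adaptation: widen after a hit, narrow after a miss.
                w = min(max_w, w + 1) if found else max(min_w, w - 1)
            return pairs

        if __name__ == "__main__":
            data = [{"id": "a1", "name": "log parser"},
                    {"id": "a2", "name": "log  parser"},
                    {"id": "b1", "name": "unit test"}]
            same = lambda r, s: r["name"].split() == s["name"].split()
            print(snm_adaptive(data, key=lambda r: r["name"], is_match=same))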
