Open Access

ARTICLE


An Update Method of Decision Implication Canonical Basis on Attribute Granulating

Yanhui Zhai1,2,*, Rujie Chen1, Deyu Li1,2

1 School of Computer and Information Technology, Shanxi University, Taiyuan, Shanxi Province, 030006, China
2 Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, Shanxi University, Taiyuan, Shanxi Province, 030006, China

* Corresponding Author: Yanhui Zhai.

(This article belongs to the Special Issue: Cognitive Granular Computing Methods for Big Data Analysis)

Intelligent Automation & Soft Computing 2023, 37(2), 1833-1851. https://doi.org/10.32604/iasc.2023.039553

Abstract

Decision implication is a form of decision knowledge representation that avoids generating implications holding merely among condition attributes or merely among decision attributes; compared with other forms of decision knowledge representation, it has a stronger knowledge representation capability. Attribute granulation facilitates knowledge extraction at different attribute granularity levels and is thus of practical significance. Decision implication canonical basis (DICB) is the most compact set of decision implications and can efficiently represent all the knowledge in a decision context. In order to mine all decision information from a decision context under attribute granulating, this paper proposes an update method for DICB. To this end, the paper reduces the update of DICB to the update of decision premises after deleting an attribute and then after adding the granulation attributes of that attribute. On this basis, the paper analyzes the changes of decision premises, examines their properties, designs an algorithm for incrementally generating DICB, and verifies its effectiveness through experiments. In practice, with the proposed update algorithm, users can obtain all the decision knowledge of a decision context after attribute granulation.

Keywords


1  Introduction

Formal Concept Analysis (FCA) is a data analysis and processing tool proposed by Prof. Wille [1]. FCA has been widely studied [2–5] and applied in machine learning [6–8], data mining [9,10], information retrieval [11,12], conflict analysis [13–15] and recommendation systems [16–18].

In FCA, a formal context is a two-dimensional table that reflects the relationships between objects and attributes. Attribute implication is a formal representation of knowledge in formal contexts. Since the number of (attribute) implications extracted from a formal context is very large, Qu et al. [19] proposed decision implication to exclude the implications that hold merely between condition attributes or merely between decision attributes. Zhai et al. [20] studied the semantical and syntactical characteristics of decision implication from a logical perspective. In particular, in the syntactical aspect, Zhai et al. [20] proposed two inference rules, namely AUGMENTATION and COMBINATION, to perform knowledge reasoning among decision implications. Zhai et al. [21] further investigated the influence of the order and the number of applications of the inference rules on knowledge reasoning and designed an optimal reasoning strategy.

The number of decision implications in decision contexts, however, is still huge. Thus, Zhai et al. [22] introduced decision premise and decision implication canonical basis (DICB), and proved that DICB is a complete, non-redundant and optimal set of decision implications. In other words, DICB not only keeps all the information in decision contexts (completeness), but also contains the least number of decision implications among all complete sets of decision implications (optimality). To generate DICB efficiently, Li et al. [23] proposed a true premise-based generation algorithm for DICB, and Zhang et al. [24] proposed an incremental generation algorithm for DICB.

In fact, researchers have investigated other forms of knowledge representation and reasoning besides decision implication, such as concept rules and granular rules [25,26]. Concept rules are decision implications whose premises and conclusions are intents [25], and granular rules are decision implications whose premises and conclusions are granular intents [25]. Based on these works, Qin et al. [27] investigated attribute (object)-oriented decision rule acquisition; Xie et al. [28] discussed decision rule acquisition in multi-granularity decision contexts; Zhi et al. [29] proposed a fuzzy rule acquisition method based on granule description in fuzzy formal contexts; Hu et al. [30] studied rules whose premises and conclusions are dual (formal) concepts and formal (dual) concepts. Zhang et al. [31] compared concept rules, granular rules and decision implications, and found that concept rules have a stronger knowledge representation capability than granular rules, and that decision implications have a stronger capability than both; i.e., decision implication is the strongest of the three forms of knowledge representation and reasoning on decision contexts. Thus, it is recommended in [21] that a knowledge representation and reasoning system based on decision implication be constructed in applications by using DICB as the knowledge base and CON-COMBINATION and AUGMENTATION as the inference engine.

On the other hand, since granular computing [32] is able to solve complex problems with multi-level structures and facilitates knowledge acquisition at different levels and granularities, many granulation methods have been developed in FCA, such as relational granulation [33,34], attribute granulation [35–41], and object granulation [42]. This paper mainly focuses on knowledge discovery on attribute granulating.

The existing attribute granulation methods mainly focus on how to update the concept lattice after attribute granularity transformation [35], aiming at reducing the complexity of regenerating the concept lattice. For example, Belohlavek et al., Wan et al. and Zou et al. proposed algorithms to update the concept lattice on attribute granulating [35–37]; Shao et al. [38] proposed an algorithm to update the object (attribute)-oriented multi-granularity concept lattice.

These studies, however, did not take knowledge discovery on attribute granulating into account. Because decision implication is superior to concept rule and granular rule, this paper examines the update of decision implications on attribute granulating; in particular, since DICB can efficiently represent all the information of decision implications [22], it is sufficient to examine the update of DICB. Thus, the aim of the paper is to examine the relationship between the DICBs before and after granulation and to find a more efficient method to generate the DICB of the granulation context.

This paper is organized as follows. Section 2 reviews the related concepts and properties of decision implication and DICB; Section 3 introduces the granulation context and discusses the update of decision premises (the premises of DICB) on attribute granulating; Section 4 proposes an incremental method for updating DICB; Section 5 verifies the method experimentally; Section 6 concludes the paper.

2  Basic Notions

This section reviews the basic concepts and conclusions of decision implication and DICB; further details can be found in [19,20].

Definition 1 [19]: A decision context K is a triple K = (G, C∪D, I_C∪I_D), where G denotes the set of objects, C the set of condition attributes, D the set of decision attributes with C∩D = ∅, I_C ⊆ G×C the incidence relation between objects and condition attributes, and I_D ⊆ G×D the incidence relation between objects and decision attributes. For g∈G and m∈C (or m∈D), (g,m)∈I_C (or (g,m)∈I_D) means "object g has attribute m".

Definition 2 [19]: Let K = (G, C∪D, I_C∪I_D) be a decision context. For A ⊆ G, B1 ⊆ C and B2 ⊆ D, we define:

(1)   A^C = {m∈C | (g,m)∈I_C for all g∈A}

(2)   A^D = {m∈D | (g,m)∈I_D for all g∈A}

(3)   B1^C = {g∈G | (g,m)∈I_C for all m∈B1}

(4)   B2^D = {g∈G | (g,m)∈I_D for all m∈B2}
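To make the four derivation operators concrete, here is a minimal Python sketch over a toy context; the data and all function names (intent, extent) are illustrative assumptions of this note, not taken from the paper.

```python
# A toy decision context (illustrative data, not Table 1):
# objects mapped to the condition/decision attributes they possess.
IC = {"g1": {"a", "b"}, "g2": {"b", "c"}, "g3": {"a"}}   # I_C as object -> attributes
ID = {"g1": {"h"}, "g2": {"h"}, "g3": {"i"}}             # I_D as object -> attributes
C = {"a", "b", "c"}
D = {"h", "i"}

def intent(objs, I, universe):
    """A^C (or A^D): the attributes shared by every object in objs."""
    shared = set(universe)
    for g in objs:
        shared &= I[g]
    return shared

def extent(attrs, I):
    """B^C (or B^D): the objects possessing every attribute in attrs."""
    return {g for g in I if attrs <= I[g]}

# For instance, {"a"}^C = {g1, g3}, and ({"a"}^C)^D = {g1, g3}^D = {}:
print(extent({"a"}, IC))                 # {'g1', 'g3'}
print(intent(extent({"a"}, IC), ID, D))  # set()
```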

Proposition 1 [19]: Let K = (G, C∪D, I_C∪I_D) be a decision context. For A1, A2 ⊆ C and B1, B2 ⊆ D, we have:

(1)   A1 ⊆ A2 ⟹ A2^C ⊆ A1^C, B1 ⊆ B2 ⟹ B2^D ⊆ B1^D

(2)   (A1∪A2)^C = A1^C ∩ A2^C, (B1∪B2)^D = B1^D ∩ B2^D

(3)   (A1∩A2)^C ⊇ A1^C ∪ A2^C, (B1∩B2)^D ⊇ B1^D ∪ B2^D

Definition 3 [20]: Let K = (G, C∪D, I_C∪I_D) be a decision context. For A ⊆ C and B ⊆ D, if A^C ⊆ B^D, then A→B is called a decision implication of K, where A is the premise of A→B and B is the conclusion of A→B.

Definition 4 [22]: Let K = (G, C∪D, I_C∪I_D) be a decision context. A set A ⊆ C is called a decision premise of K if A satisfies the following conditions:

(1)   A is minimal with respect to A^{CD}, i.e., if Ai ⊊ A, then Ai^{CD} ⊊ A^{CD};

(2)   A is proper, i.e.,

A^{CD} ⊈ ∪{Ai^{CD} | Ai ⊊ A is a decision premise of K} = ∪{Ai^{CD} | Ai ⊊ A}   (1)

The set

{A → A^{CD} | A is a decision premise of K}   (2)

is called the decision implication canonical basis (DICB) of K.

Zhai et al. [22] proved that for any decision context, DICB is complete, non-redundant, and optimal. In other words, DICB can be considered as a complete (completeness) and compact (non-redundancy and optimality) knowledge representation of a decision context. Furthermore, Li et al. [23] proved that the properness of a decision premise implies its minimality, i.e., A is a decision premise if and only if A is proper.
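Since, by the result of [23] just quoted, properness alone characterizes decision premises, a brute-force DICB generator can be sketched as follows, continuing the toy intent/extent helpers above; the exponential enumeration is only for illustration, whereas the efficient generation algorithms are those of [23,24].

```python
from itertools import combinations

def acd(A, IC, ID, D):
    """A^{CD}: first derive A^C, then take its D-intent."""
    return intent(extent(set(A), IC), ID, D)

def is_proper(A, IC, ID, D):
    """Definition 4(2): A^{CD} is not covered by the union of Ai^{CD}
    over all proper subsets Ai of A (properness implies minimality [23])."""
    covered = set()
    for k in range(len(A)):
        for Ai in combinations(sorted(A), k):
            covered |= acd(Ai, IC, ID, D)
    return not acd(A, IC, ID, D) <= covered

def dicb(C, IC, ID, D):
    """Brute-force DICB: all decision implications A -> A^{CD} with A proper."""
    basis = []
    for k in range(len(C) + 1):
        for A in combinations(sorted(C), k):
            if is_proper(A, IC, ID, D):
                basis.append((set(A), acd(A, IC, ID, D)))
    return basis
```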

3  DICB on Attribute Granulating

This section introduces the granulation context based on attribute granularity refinement [39] and studies the update of decision premises.

Definition 5: Let K = (G, C∪D, I_C∪I_D) be a decision context and a∈C. The attribute a can be refined into a set of attributes {a1, a2, …, an}, where ai∉C, ai^C ∩ aj^C = ∅ for i≠j, and ∪_{i=1}^{n} ai^C = a^C. The decision context K_a = (G, C_a∪D, I_{Ca}∪I_D) is called a granulation context of K, where C_a = C∖{a} ∪ {a1, a2, …, an} and I_{Ca} = {(g,m) | g∈G, m∈C_a, g∈m^C}.

It can be seen from Definition 5 that the refinement of a divides a into n mutually exclusive attributes {a1, a2, …, an}, and no object can have two of them at the same time. In other words, the extents ai^C, i = 1, 2, …, n, constitute a partition of a^C. Obviously, we have 1 < n ≤ |a^C|. In the granulation context, since the attribute a has been refined, C_a removes a from C and adds the refined attributes {a1, a2, …, an}.
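As an illustration of Definition 5, the following sketch builds I_{Ca} from I_C given a partition of a^C into blocks; the function name and the dict-based encoding are assumptions of this note.

```python
def granulate(IC, a, blocks):
    """Replace attribute a by granulation attributes, per Definition 5.

    blocks: dict mapping each new attribute ai to the block of a^C it covers;
    the blocks must be pairwise disjoint and their union must equal a^C."""
    aC = {g for g, row in IC.items() if a in row}
    assert set().union(*blocks.values()) == aC
    assert sum(len(b) for b in blocks.values()) == len(aC)  # pairwise disjointness
    IC_a = {}
    for g, row in IC.items():
        new_row = set(row) - {a}
        for ai, block in blocks.items():
            if g in block:
                new_row.add(ai)          # each g in a^C gets exactly one ai
        IC_a[g] = new_row
    return IC_a

# E.g., refining a into a1/a2 over the toy context above (hypothetical blocks):
IC_a = granulate(IC, "a", {"a1": {"g1"}, "a2": {"g3"}})
```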

Example 1. A decision context for bank marketing is shown in Table 1, where G = {g1, g2, g3, g4, g5, g6, g7, g8, g9} contains nine customers, and C = {a, b, c, d, e, f, g} and D = {h, i} represent the features of the customers: a stands for whether the customer has a job, b for whether the customer is married, c for whether the customer has a bachelor's degree or above, d for whether the customer has a credit default, e for whether the customer has a home loan, f for whether the customer has a personal loan, g for whether the customer has contacted other customers within six months, h for whether the customer has applied for a loan, and i for whether the customer will make a fixed deposit.

[Table 1: Decision context for bank marketing]

When attribute a is granulated, "has a job" is refined into "doctor", "teacher", "civil servant", "self-employed" and "other professions", recorded as a1: doctor, a2: teacher, a3: civil servant, a4: self-employed, a5: other professions.

The granulation context is shown in Table 2, where G = {g1, g2, g3, g4, g5, g6, g7, g8, g9}, C_a = C∖{a} ∪ {a1, a2, a3, a4, a5} = {b, c, d, e, f, g} ∪ {a1, a2, a3, a4, a5} = {a1, a2, a3, a4, a5, b, c, d, e, f, g}, and D = {h, i}.

[Table 2: Granulation context of the decision context in Table 1]

According to [22], DICB retains all the knowledge in a decision context. Therefore, the DICB after attribute granulation also retains all the knowledge in the granulation context. Furthermore, since decision premises determine DICB [22], the update of DICB can be reduced to the update of decision premises.

To simplify the discussion, this paper divides the update of decision premises into two steps, namely, the update after deleting the attribute a and the update after adding the granulation attributes {a1, a2, …, an}.

3.1 The Update of Decision Premises after Deleting an Attribute

After attribute a is removed from K = (G, C∪D, I_C∪I_D), the new decision context is denoted as:

K_ā = (G, C_ā∪D, I_ā∪I_D)   (3)

where C_ā = C∖{a} and I_ā = I_C ∩ (G×C_ā). In order to avoid confusion, the operation (·)^C in K_ā is denoted as (·)^C̄. Since D does not change, (·)^D is applicable in K_ā.

Because decision premises are defined by the operator (·)^{CD}, in order to examine the update of decision premises from K to K_ā, it is necessary to examine the change from A^{CD} to A^{C̄D}.

First, the following properties are given.

Proposition 2: For decision contexts K and K_ā, let A ⊆ C. Then, the following conclusions hold:

(1)   If a∉A, then A^C̄ = A^C, namely A^{C̄D} = A^{CD};

(2)   If a∈A, then A^{C̄D} ∪ a^{CD} ⊆ A^{CD};

(3)   A^{C̄D} ⊆ A^{CD}.

Proof: (1) If a∉A, by the definitions of (·)^C and (·)^C̄, we can obtain A^C̄ = A^C and thus A^{C̄D} = A^{CD}.

(2) If a∈A, by the definitions of (·)^C and (·)^C̄, we have A^C = (A∖{a} ∪ {a})^C = (A∖{a})^C ∩ a^C = A^C̄ ∩ a^C. Thus, we obtain A^{CD} = (A^C̄ ∩ a^C)^D ⊇ A^{C̄D} ∪ a^{CD} and thus A^{C̄D} ∪ a^{CD} ⊆ A^{CD}.

(3) It follows from (1) and (2).

In order to generate the decision premises of K_ā from the decision premises of K, we classify the changes of decision premises from K to K_ā into three cases:

(1)   If A is a decision premise of K and A is also a decision premise of K_ā, A is an unchanged decision premise.

(2)   If A is a decision premise of K and A is not a decision premise of K_ā, A is an invalid decision premise.

(3)   If A is not a decision premise of K and A is a decision premise of K_ā, A is a new decision premise.

The following conclusion identifies the characteristics of unchanged decision premises.

Proposition 3: For decision contexts K and K_ā, if A is a decision premise of K, then A is an unchanged decision premise if and only if a∉A.

Proof: Sufficiency: By Proposition 2, if a∉A, then we have A^{C̄D} = A^{CD}. Furthermore, for any Aj such that Aj ⊊ A, since a∉Aj, we have Aj^{C̄D} = Aj^{CD}. Because A is a decision premise of K, we have A^{C̄D} = A^{CD} ⊈ ∪{Aj^{CD} | Aj ⊊ A} = ∪{Aj^{C̄D} | Aj ⊊ A}. According to Definition 4, A is a decision premise of K_ā.

Necessity: Since A is an unchanged decision premise, A is a decision premise of K_ā, and thus A ⊆ C_ā = C∖{a}, which implies a∉A.

The following conclusion identifies the characteristics of invalid decision premises.

Proposition 4: For decision contexts K and K_ā, if A is a decision premise of K, then A is an invalid decision premise if and only if a∈A.

Proof: If A is a decision premise of K, then A is either an invalid decision premise or an unchanged decision premise. If A is an invalid decision premise, then A is not an unchanged decision premise and, by Proposition 3, we have a∈A. Conversely, if a∈A, by Proposition 3, A is not an unchanged decision premise and must be an invalid decision premise.

The following result shows that there does not exist a new decision premise in K_ā.

Proposition 5: For the decision context K_ā, there does not exist a new decision premise.

Proof: Assume that A is a new decision premise of K_ā. Since A is a decision premise of K_ā, by the definition of K_ā, we have a∉A, and by Proposition 2, we have A^C̄ = A^C and A^{C̄D} = A^{CD}; likewise, every Aj ⊊ A satisfies a∉Aj and hence Aj^{C̄D} = Aj^{CD}. Therefore A is also a decision premise of K, contradicting the assumption that A is a new decision premise. Thus, there does not exist a new decision premise in K_ā.

It can be seen from Proposition 5 that the decision premises from K to K_ā fall into two categories: unchanged decision premises and invalid decision premises. Because the invalid decision premises no longer hold in K_ā, the decision premises of K_ā consist exactly of the unchanged decision premises of K. In other words, no new decision premises appear after removing the condition attribute; new decision premises appear only after adding the granulation attributes.
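In code, the conclusion of this subsection is a single filter over the decision premises of K; a sketch, assuming dp_K holds the decision premises of K as sets and a is the attribute to be refined:

```python
# Propositions 3-5: the decision premises of K after deleting a are exactly
# the decision premises of K that do not contain a.
dp_after_delete = [A for A in dp_K if a not in A]
```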

3.2 The Update of Decision Premises after Adding Granulation Attributes

This section examines the update from the decision premises of K_ā to those of K_a after adding the granulation attributes {a1, a2, …, an} to K_ā.

Similarly, we classify the changes of decision premises from K_ā to K_a into three cases:

(1)   If A is a decision premise of K_ā and A is also a decision premise of K_a, A is an unchanged decision premise.

(2)   If A is a decision premise of K_ā and A is not a decision premise of K_a, A is an invalid decision premise.

(3)   If A is not a decision premise of K_ā and A is a decision premise of K_a, A is a new decision premise.

In order to avoid confusion, the operation (·)^C in K_a is denoted as (·)^C̃. Since D does not change, (·)^D is applicable in K_a. The properties of (·)^C̄ and (·)^C̃ in K_ā and K_a are given below.

Proposition 6: For decision contexts K_ā and K_a, and A ⊆ C_a, the following conclusions hold:

(1)   If A ⊆ C_ā, then A^C̄ = A^C̃ and A^{C̄D} = A^{C̃D};

(2)   If A ⊈ C_ā, then (A∩C_ā)^{C̄D} ⊆ A^{C̃D}.

Proof: (1) By the definitions of (·)^C̃ and (·)^C̄, it is obvious.

(2) By (1), we have (A∩C_ā)^{C̄D} = (A∩C_ā)^{C̃D}, and by A∩C_ā ⊆ A, we have (A∩C_ā)^{C̃D} ⊆ A^{C̃D} and thus (A∩C_ā)^{C̄D} ⊆ A^{C̃D}.

The following conclusion identifies the characteristics of the unchanged and invalid decision premises from K_ā to K_a.

Proposition 7: For decision contexts K_ā and K_a, if A is a decision premise of K_ā, then A is a decision premise of K_a.

Proof: Since A is a decision premise of K_ā, we have A ⊆ C_ā ⊆ C_a and, by Proposition 6, A^{C̃D} = A^{C̄D}; furthermore, for any Aj satisfying Aj ⊊ A ⊆ C_ā ⊆ C_a, we have Aj^{C̃D} = Aj^{C̄D} and thus A^{C̃D} = A^{C̄D} ⊈ ∪{Aj^{C̄D} | Aj ⊊ A} = ∪{Aj^{C̃D} | Aj ⊊ A}. It can be seen from Definition 4 that A is a decision premise of K_a.

Proposition 7 shows that all the decision premises of K_ā are unchanged decision premises of K_a. Thus, according to the classification above, there does not exist an invalid decision premise from K_ā to K_a.

Next, we discuss the new decision premises of K_a.

Proposition 8: If A is a new decision premise from K_ā to K_a, then there exists only one ai∈{a1, a2, …, an} such that ai∈A.

Proof: First, we prove that there exists ai∈{a1, a2, …, an} such that ai∈A. Since A is a new decision premise from K_ā to K_a, A is not a decision premise of K_ā but is a decision premise of K_a, i.e., we have A^{C̄D} ⊆ ∪{Aj^{C̄D} | Aj ⊊ A} and A^{C̃D} ⊈ ∪{Aj^{C̃D} | Aj ⊊ A}. If there does not exist ai∈{a1, a2, …, an} such that ai∈A, we have A ⊆ C_ā. By Proposition 6, A^{C̃D} = A^{C̄D}; similarly, for Aj satisfying Aj ⊊ A ⊆ C_ā ⊆ C_a, we have Aj^{C̃D} = Aj^{C̄D} and thus A^{C̃D} = A^{C̄D} ⊆ ∪{Aj^{C̄D} | Aj ⊊ A} = ∪{Aj^{C̃D} | Aj ⊊ A}, which contradicts A^{C̃D} ⊈ ∪{Aj^{C̃D} | Aj ⊊ A}. Thus, we have A ⊈ C_ā, and there must exist ai∈{a1, a2, …, an} such that ai∈A.

Next, suppose that more than one attribute of {a1, a2, …, an} is contained in A, say, {a_{i1}, a_{i2}, …, a_{ik}} ⊆ A with 2 ≤ k ≤ n.

By the definition of redundant decision implications, a decision implication E→F is redundant with respect to a set L of decision implications if and only if for any T ⊆ C_a∪D, if T satisfies L, then T also satisfies E→F. Therefore, if every T ⊆ C_a∪D satisfies E→F, then E→F is redundant with respect to any L.

For any T ⊆ C_a∪D, by the definition of granulation attributes, there exists at most one ak∈{a1, a2, …, an} such that ak∈T. Therefore, we have {a_{i1}, a_{i2}, …, a_{ik}} ⊈ T, and since {a_{i1}, a_{i2}, …, a_{ik}} ⊆ A, we have A ⊈ T, i.e., T satisfies A→A^{C̃D}. Thus, A→A^{C̃D} is a redundant decision implication of K_a. Since DICB is non-redundant, A is not a decision premise, which contradicts the fact that A is a new decision premise of K_a.

By the discussion in Sections 3.1 and 3.2, for a decision premise A of K, if a∉A, then by Propositions 3 and 7, A is a decision premise of K_ā and also a decision premise of K_a. In addition, by Propositions 7 and 8, the decision premises of K_a also include new decision premises of the form {ai}∪E, where ai∈{a1, a2, …, an} and E ⊆ C_ā; these can be generated based on true premises, as shown in Section 4.

4  Algorithm for Updating DICB on Attribute Granulating

By Section 3, some decision premises of K_a can be directly obtained by checking whether the attribute a is contained in each decision premise A of K; the other decision premises of K_a are of the form {ai}∪E. By Definition 4, in order to determine whether {ai}∪E is a decision premise, it is necessary to consider every subset of {ai}∪E. Due to the complexity of enumerating all these subsets, we propose a true premise [23] based algorithm for determining whether {ai}∪E is a decision premise, as well as for updating DICB.

Firstly, Definition 6 gives the definition of true premise.

Definition 6 [23]: Let K = (G, C∪D, I_C∪I_D) be a decision context. For A ⊆ C and B ⊆ D, A is a true premise of B if A satisfies the following conditions:

(1)   A is a premise of B (i.e., A^C ⊆ B^D);

(2)   For any Ai ⊊ A, Ai is not a true premise of B.

The following theorem shows the equivalence of the decision premise and the true premise.

Theorem 1 [23]: Let K = (G, C∪D, I_C∪I_D) be a decision context and A ⊆ C. Then, A is a decision premise if and only if A is a true premise of some d∈A^{CD}.

According to Theorem 1, we can determine whether {ai}∪E is a decision premise of K_a by determining whether {ai}∪E is a true premise of some decision attribute. In other words, in order to generate the decision premises of the form {ai}∪E, it is sufficient to generate, for each d∈D, all the true premises of d that contain ai.

Definition 7 defines the notations needed to calculate true premises.

Definition 7 [23]: Let K = (G, C∪D, I_C∪I_D) be a decision context. For d∈D, denote d̄ = {g∈G | d∉g^D}, and for g∈G define g◁d if the following conditions are satisfied:

(1)   g∈d̄;

(2)   For h∈G, if g^C ⊊ h^C, then d∈h^D.

For d∈D, we denote Φ(d) = {g∈G | g◁d}.
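Definition 7 translates directly into code: Φ(d) collects the objects lacking d whose condition intents are maximal among all objects lacking d (objects h with d∈h^D satisfy condition (2) trivially). A sketch, reusing the object-to-attributes encoding from Section 2:

```python
def phi(d, IC, ID):
    """Phi(d) per Definition 7: g without d such that no h without d
    has a strictly larger condition intent."""
    d_bar = {g for g in IC if d not in ID[g]}          # the set d-bar
    return {g for g in d_bar
            if not any(IC[g] < IC[h] for h in d_bar)}  # < is proper subset
```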

Definition 8 gives the concepts of candidate premise and candidate true premise, which are used in the process of generating true premises.

Definition 8 [23]: Let K = (G, C∪D, I_C∪I_D) be a decision context. For A ⊆ C, d∈D and P ⊆ Φ(d), we define:

(1)   If P = ∅, or for any g∈P we have A ⊈ g^C, then A is called a candidate premise of d under P, denoted by A ⊢_P d; otherwise, we write A ⊬_P d.

(2)   If A ⊢_P d, and for any Ai ⊊ A we have Ai ⊬_P d, then A is called a candidate true premise of d under P, denoted by A ⊩_P d; otherwise, we write A ⊮_P d.
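The two relations of Definition 8 can be sketched as predicates; note that d enters only through the choice of P ⊆ Φ(d), and that testing the maximal proper subsets A∖{x} in the second predicate suffices, because Ai ⊆ Aj and Aj ⊬_P d imply Ai ⊬_P d.

```python
def cand_premise(A, P, IC):
    """A |-_P d (Definition 8(1)): P is empty or no g in P has all of A."""
    return all(not (A <= IC[g]) for g in P)

def cand_true_premise(A, P, IC):
    """A ||-_P d (Definition 8(2)): A |-_P d and every proper subset Ai
    fails |-_P d; checking the maximal proper subsets A - {x} suffices."""
    if not cand_premise(A, P, IC):
        return False
    return all(any(A - {x} <= IC[g] for g in P) for x in A)
```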

Proposition 9 [23]: Let K = (G, C∪D, I_C∪I_D) be a decision context. For A ⊆ C and d∈D, we have:

(1)   If A ⊢_{Φ(d)} d, then A is a premise of d.

(2)   If A ⊩_{Φ(d)} d, then A is a true premise of d.

By Definition 7 and Proposition 9, true premises can be identified based on Φ(d) and d̄. Since d̄ is unchanged on attribute granulating, we only need to discuss the change of Φ(d) on attribute granulating. In K_a, the counterpart of Φ(d) is denoted as Φ′(d).

Proposition 10: Let K = (G, C∪D, I_C∪I_D) be a decision context, K_a = (G, C_a∪D, I_{Ca}∪I_D) a granulation context of K, and d∈D. Then we have:

(1)   Φ(d) ⊆ Φ′(d).

(2)   Φ′(d) = Φ(d) ∪ {g∈(a^C∩d̄)∖Φ(d) | for any h∈d̄ such that g^C ⊊ h^C, there does not exist ai∈C_a such that {g,h} ⊆ ai^C̃}.

Proof: (1) By the definitions of Φ(d) and Φ′(d), it is sufficient to prove that for g∈Φ(d), there does not exist h∈d̄ in K_a such that g^C̃ ⊊ h^C̃.

If g∉a^C, by the granulation process of a in Definition 5, for any ai∈C_a we have g∉ai^C̃, i.e., g^C = g^C̃. In K_a, for h∈d̄, we need to consider two cases: (a) If h∉ai^C̃ for every ai∈C_a, we obtain h^C = h^C̃. From g∈Φ(d), it follows that there does not exist h∈d̄ such that g^C ⊊ h^C in K, i.e., there does not exist h∈d̄ such that g^C̃ ⊊ h^C̃. (b) If there exists ai∈C_a such that h∈ai^C̃, by Definition 5 there exists exactly one such ai. Then, we have h^C̃ = h^C∖{a} ∪ {ai}. Assume g^C = g^C̃ ⊊ h^C̃. By g∉a^C, it is easy to prove that g^C ⊆ h^C∖{a} ⊊ h^C. By the definition of Φ(d) and g∈Φ(d), we have d∈h^D and thus h∉d̄, which contradicts h∈d̄. Therefore, there does not exist h∈d̄ such that g^C̃ ⊊ h^C̃.

If g∈a^C, assume g∉Φ′(d), i.e., there exists h∈d̄ in K_a such that g^C̃ ⊊ h^C̃. By g∈a^C and the granulation process of a, there exists exactly one ai∈C_a such that g∈ai^C̃, and since g^C̃ ⊊ h^C̃, we have h∈ai^C̃ and thus {g,h} ⊆ ai^C̃. Since ai^C̃ ⊆ a^C, we have {g,h} ⊆ a^C. From h^C̃ = h^C∖{a} ∪ {ai} and g^C̃ = g^C∖{a} ∪ {ai}, it follows that g^C∖{a} ∪ {ai} ⊊ h^C∖{a} ∪ {ai}, i.e., g^C ⊊ h^C. By the definition of Φ(d) and g∈Φ(d), we have d∈h^D and thus h∉d̄, which contradicts h∈d̄. Thus, there does not exist h∈d̄ such that g^C̃ ⊊ h^C̃.

(2) By the definition of Φ′(d), we have Φ′(d) = {g∈d̄ | there is no h∈d̄ such that g^C̃ ⊊ h^C̃}. By Conclusion (1), we have Φ(d) ⊆ Φ′(d) ⊆ d̄ and thus Φ′(d) = Φ(d) ∪ {g∈d̄∖Φ(d) | there is no h∈d̄ such that g^C̃ ⊊ h^C̃}.

Next, we prove that every g∈d̄∖Φ(d) belonging to Φ′(d) satisfies a∈g^C. Assume a∉g^C, i.e., g∉a^C. By g∉Φ(d), there exists h∈d̄ in K such that g^C ⊊ h^C. We need to consider two cases. (a) If h∈a^C, by the granulation process of a in Definition 5, there exists exactly one ai∈C_a such that h∈ai^C̃, and we have h^C̃ = h^C∖{a} ∪ {ai}. By a∉g^C, we have g^C ⊆ h^C∖{a} and thus g^C ⊊ h^C̃ (note that ai∉g^C). By g∉a^C and the proof of (1), we have g^C = g^C̃, and hence g^C̃ ⊊ h^C̃, i.e., h satisfies g^C̃ ⊊ h^C̃ and h∈d̄. Thus, g∉Φ′(d), which contradicts g∈Φ′(d). (b) If h∉a^C, by g∉a^C, h∉a^C and the proof of (1), we have g^C = g^C̃ and h^C = h^C̃, and thus g^C̃ = g^C ⊊ h^C = h^C̃. Since g^C̃ ⊊ h^C̃ and h∈d̄, we have g∉Φ′(d), which contradicts g∈Φ′(d). In both cases, we obtain g∈a^C.

Combining this with Φ′(d) = Φ(d) ∪ {g∈d̄∖Φ(d) | there is no h∈d̄ such that g^C̃ ⊊ h^C̃}, we obtain Φ′(d) = Φ(d) ∪ {g∈(a^C∩d̄)∖Φ(d) | there is no h∈d̄ such that g^C̃ ⊊ h^C̃}. It remains to prove that for g∈(a^C∩d̄)∖Φ(d), the following two conditions are equivalent. Condition 1: there is no h∈d̄ such that g^C̃ ⊊ h^C̃. Condition 2: for any h∈d̄ such that g^C ⊊ h^C, there does not exist ai∈C_a such that {g,h} ⊆ ai^C̃.

First, we prove that if g∈(a^C∩d̄)∖Φ(d) satisfies Condition 1, then g also satisfies Condition 2. Assume that there exist h∈d̄ with g^C ⊊ h^C and ai∈C_a with {g,h} ⊆ ai^C̃. By g∈a^C, we have a∈g^C ⊊ h^C and thus {g,h} ⊆ a^C. Combining {g,h} ⊆ a^C and {g,h} ⊆ ai^C̃ with the granulation process of a, we obtain h^C̃ = h^C∖{a} ∪ {ai} and g^C̃ = g^C∖{a} ∪ {ai}; since g^C ⊊ h^C, we have g^C̃ ⊊ h^C̃. Thus, there is h∈d̄ such that g^C̃ ⊊ h^C̃, which contradicts Condition 1.

Next, we prove that if g∈(a^C∩d̄)∖Φ(d) satisfies Condition 2, then g also satisfies Condition 1. Let h∈d̄. If g^C ⊊ h^C, then by Condition 2 there does not exist ai∈C_a such that {g,h} ⊆ ai^C̃. In this case, by g∈a^C and h∈a^C (note a∈g^C ⊊ h^C), according to the granulation process of a, there exist aj, ak∈C_a such that g∈aj^C̃ and h∈ak^C̃; because no ai∈C_a contains both g and h, we have aj ≠ ak, g∉ak^C̃ and h∉aj^C̃, and hence aj∈g^C̃∖h^C̃, i.e., g^C̃ ⊊ h^C̃ does not hold. If g^C ⊊ h^C does not hold, we have g^C ⊈ h^C or g^C = h^C. If g^C = h^C, then g and h either belong to the same ai^C̃, in which case g^C̃ = h^C̃, or belong to different blocks, in which case g^C̃ and h^C̃ are incomparable; in both cases g^C̃ ⊊ h^C̃ does not hold. If g^C ⊈ h^C, we prove that g^C̃ ⊊ h^C̃ does not hold in two cases. (a) If a∈h^C, since g^C ⊈ h^C, there exists b1∈C such that b1 ≠ a, b1∈g^C and b1∉h^C; it follows that b1∈g^C̃∖h^C̃ and thus g^C̃ ⊊ h^C̃ does not hold. (b) If a∉h^C, by the granulation process of a, there does not exist ai∈{a1, a2, …, an} such that ai∈h^C̃; by g∈a^C and the granulation process of a, there exists ai∈{a1, a2, …, an} such that ai∈g^C̃. Thus, g^C̃ ⊊ h^C̃ does not hold.

Next, we generate the true premises of d that contain ai, starting from determining whether {ai} itself is a true premise of d.

Proposition 11: Let K_a = (G, C_a∪D, I_{Ca}∪I_D) be a granulation context, d∈D, and ai∈{a1, a2, …, an}. Setting P = Φ′(d)∖ai^C̃, we have {ai} ⊩_P d.

Proof: For any g∈P, we have ai∉g^C̃, i.e., {ai} ⊈ g^C̃, and thus {ai} ⊢_P d. Furthermore, since {Ai | Ai ⊊ {ai}} = {∅} and ∅ ⊆ g^C̃ for any g∈P, i.e., ∅ ⊬_P d, by Definition 8, we have {ai} ⊩_P d.

At this point, we obtain the candidate true premise {ai} of d under P according to Proposition 11. Next, we need to gradually grow P to Φ′(d) to determine whether {ai} is a true premise of d. For this incremental process, reference [23] classified candidate true premises into three cases (Definition 9) and discussed their properties (Proposition 12).

Definition 9 [23]: Let K = (G, C∪D, I_C∪I_D) be a decision context. For A ⊆ C, d∈D, P ⊆ Φ(d), and g∈Φ(d)∖P, we define:

(1)   If A ⊩_P d and A ⊩_{P∪{g}} d, then A is called an unchanged candidate true premise of d under P∪{g}.

(2)   If A ⊩_P d and A ⊮_{P∪{g}} d, then A is called an invalid candidate true premise of d under P∪{g}.

(3)   If A ⊮_P d and A ⊩_{P∪{g}} d, then A is called a new candidate true premise of d under P∪{g}.

Proposition 12 [23]: Let K = (G, C∪D, I_C∪I_D) be a decision context. For A ⊆ C, d∈D, P ⊆ Φ(d) and g∈Φ(d)∖P, we have:

(1)   A is an unchanged candidate true premise of d under P∪{g} if and only if A ⊩_P d and A ⊈ g^C.

(2)   A is an invalid candidate true premise of d under P∪{g} if and only if A ⊩_P d and A ⊆ g^C.

(3)   A is a new candidate true premise of d under P∪{g} if and only if the following conditions hold:

a)   A ⊮_P d;

b)   There exists an invalid candidate true premise Am satisfying A = Am∪{b} for some b∈C∖g^C;

c)   For any Ai ⊊ A, we have Ai ⊬_{P∪{g}} d.

By Definition 9 and Proposition 12, the objects in the set Φ′(d)∩ai^C̃ can be added to P one by one. If {ai} ⊩_{Φ′(d)} d, then {ai} is a true premise of d; otherwise {ai} is not a true premise of d. After adding an object g of Φ′(d)∩ai^C̃ to P, by Definition 9, {ai} may be an unchanged or an invalid candidate true premise of d under P∪{g}. If {ai} is an unchanged candidate true premise, we continue to add the remaining objects of Φ′(d)∩ai^C̃ to P. If {ai} is an invalid candidate true premise, by Proposition 12(3), the sets A = {ai}∪{b} with b∈C_a∖g^C̃ can be generated. In this case, it needs to be further judged by Proposition 12(3) whether A = {ai, b} is a new candidate true premise of d under P∪{g}. If {ai, b} is a new candidate true premise, the remaining objects of Φ′(d)∩ai^C̃ are gradually added to P∪{g} to determine whether {ai, b} ⊩_{Φ′(d)} d holds, following the above process. By repeating this process, one can generate all the true premises of d of the form {ai}∪E.

The above process can be further optimized. For example, Proposition 13 shows that if ai satisfies a certain condition, then {ai} is a true premise of d. In this case, no set of the form {ai}∪E with E ≠ ∅ can be a true premise of d, and there is no need to examine such sets.

Proposition 13: Let K_a = (G, C_a∪D, I_{Ca}∪I_D) be a granulation context, d∈D, and ai∈{a1, a2, …, an}. If ai^C̃∩Φ′(d) = ∅, then {ai} is a true premise of d.

Proof: Let P = Φ′(d)∖ai^C̃. If ai^C̃∩Φ′(d) = ∅, then we have P = Φ′(d). By Proposition 11, we obtain {ai} ⊩_P d, i.e., {ai} ⊩_{Φ′(d)} d, and by Proposition 9, {ai} is a true premise of d.

A true premise-based incremental method can then be proposed for generating DICB under attribute granulation, as shown in Algorithm 1.

[Algorithm 1: Incremental generation of the DICB of the granulation context K_a]

In Algorithm 1, we first generate the unchanged decision premises from the decision premises of K according to the results in Section 3 (Steps 2–6), and then generate the new decision premises of K_a by generating, for each d∈D, the true premises of d that contain some ai (Steps 7–20). In Steps 7–20, for a decision attribute d, we first generate Φ′(d) by the function get_newgd (Step 8), then generate the new decision premises {ai}∪E for each ai by generator_newdp (Steps 10–13), and finally generate the decision implications of the new decision premises (Steps 14–18). In Steps 10–13, by Proposition 13, if ai^C̃∩Φ′(d) = ∅, then among the sets of the form {ai}∪E only {ai} is a true premise of d. Otherwise, it is necessary to gradually add the objects of ai^C̃∩Φ′(d) by the function generator_newdp to obtain dp_ai, the set of all the true premises of d of the form {ai}∪E.
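Since the pseudocode of Algorithm 1 is given in the figure, the flow just described can be sketched as follows; get_newgd and generator_newdp are the functions named above (Algorithms 2 and 3), while the argument lists and the deduplication are assumptions of this sketch.

```python
def update_dicb(dp_K, a, grans, C_a, D, IC, IC_a, ID):
    """Sketch of Algorithm 1: unchanged premises (Steps 2-6), then the new
    premises {ai} | E for every decision attribute d (Steps 7-20)."""
    premises = [set(A) for A in dp_K if a not in A]       # Steps 2-6
    for d in D:                                           # Steps 7-20
        phi_p = get_newgd(d, a, grans, IC, IC_a, ID)      # Phi'(d), Algorithm 2
        for ai in grans:                                  # Steps 10-13
            if not any(ai in IC_a[g] for g in phi_p):     # Proposition 13
                dp_ai = [{ai}]
            else:                                         # Algorithm 3
                dp_ai = generator_newdp(ai, d, phi_p, IC_a, C_a, grans)
            premises.extend(dp_ai)
    uniq = {frozenset(A) for A in premises}
    # Steps 14-18: attach the conclusions A -> A^{C~D}
    return [(set(A), intent(extent(set(A), IC_a), ID, D)) for A in uniq]
```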

Algorithm 2 presents the function get_newgd for generating Φ′(d).

[Algorithm 2: The function get_newgd for generating Φ′(d)]

Algorithm 2 generates Φ′(d) according to Proposition 10. After initializing Φ′(d) (Step 1), Algorithm 2 uses the function getAll_gd in [23] to generate Φ(d) of K (Step 2). If (a^C∩d̄)∖Φ(d) = ∅, by Proposition 10(2), one should keep Φ′(d) = Φ(d) (Step 4); otherwise Φ′(d) should be generated by Proposition 10 (Steps 6–14).
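A sketch of get_newgd along the lines of Proposition 10, where IC and IC_a are the object-to-attributes maps of K and K_a, grans is the set {a1, …, an}, and phi is the Φ(d) helper sketched after Definition 7; all names are illustrative:

```python
def get_newgd(d, a, grans, IC, IC_a, ID):
    """Compute Phi'(d) from Phi(d) by Proposition 10."""
    phi_d = phi(d, IC, ID)                    # plays the role of getAll_gd in [23]
    d_bar = {g for g in IC if d not in ID[g]}
    aC = {g for g in IC if a in IC[g]}
    candidates = (aC & d_bar) - phi_d
    if not candidates:
        return phi_d                          # keep Phi'(d) = Phi(d)
    added = set()
    for g in candidates:
        # g enters Phi'(d) iff no h in d-bar with h^C strictly above g^C
        # shares a granulation attribute with g (Proposition 10(2))
        if all(not (IC_a[g] & IC_a[h] & grans)
               for h in d_bar if IC[g] < IC[h]):
            added.add(g)
    return phi_d | added
```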

Algorithm 3 presents the function generator_newdp for generating all the true premises of d of the form {ai}∪E.

[Algorithm 3: The function generator_newdp for generating the true premises of d of the form {ai}∪E]

Algorithm 3 first initializes dp_ai to {{ai}} according to Proposition 13 (Step 1), and initializes P to Φ′(d)∖ai^C̃ (Step 2). Then, Algorithm 3 adds the elements of ai^C̃∩Φ′(d) to P and judges whether {ai} is a true premise of d according to Proposition 9 (Steps 3–19). In this process (Steps 3–19), if {ai} ⊈ g^C̃, by Proposition 12 and Definition 9, we have {ai} ⊩_{P∪{g}} d, and we continue to add the elements of ai^C̃∩Φ′(d) and determine whether {ai} is a true premise of d. If {ai} ⊆ g^C̃ (Step 6), by Proposition 12, {ai} is an invalid candidate true premise of d under P∪{g}. By Definition 8(1) and g∈Φ′(d), we have {ai} ⊬_{Φ′(d)} d, and by Definition 8(2), we have {ai} ⊮_{Φ′(d)} d; thus, {ai} is not a true premise of d (Step 7). In this case, according to Definition 9, {ai} is an invalid candidate true premise, and according to Proposition 12(3), new candidate true premises with respect to g can be generated from {ai}. To this end, by Proposition 12(3)(b), any b∈C_a∖g^C̃ can be added to {ai}. By Proposition 8, the decision implications whose premises contain two or more granulation attributes are redundant; thus, it is sufficient to add the attributes of C_a∖g^C̃∖{a1, a2, …, an} to {ai} (Steps 8–9). In order to determine whether {ai}∪{b} is a new candidate true premise of d under P∪{g}, one should check whether Propositions 12(3)(a) and 12(3)(c) hold. For Condition (a), {ai} ⊢_P d holds by {ai}∈dp_ai in Step 5, and hence {ai}∪{b} ⊮_P d holds by Definition 8(2), since its proper subset {ai} is a candidate premise under P; in other words, Condition (a) holds. Thus, Algorithm 3 only checks whether Proposition 12(3)(c) holds in Steps 10–15. It should be noted that at this iteration P is equal to (Φ′(d)∖ai^C̃)∪{g}.

For a candidate true premise {ai}∪{b} that satisfies Condition (c), by Proposition 12(3), {ai}∪{b} is a new candidate true premise of d under P∪{g} = (Φ′(d)∖ai^C̃)∪{g}, i.e., we have {ai}∪{b} ⊩_{(Φ′(d)∖ai^C̃)∪{g}} d. Therefore, in the next iterations (Steps 3–19), we only need to gradually add the objects of ai^C̃∩Φ′(d)∖{g}. Similarly, in Steps 3–19, we first determine whether A = {ai, b} is an invalid candidate true premise of d under P∪{g} according to Proposition 12 (Step 6); if {ai, b} ⊆ g^C̃, then {ai, b} is an invalid candidate true premise of d under P∪{g} and should be deleted by Definitions 8(1) and 8(2) (Step 7). In this case, new candidate true premises can be generated by Steps 8–16.
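The loop of Algorithm 3 just described can be sketched recursively, each branch carrying the current candidate A, the current P, and the objects of Φ′(d)∩ai^C̃ that remain to be added; this is an illustrative reconstruction of the process, not the paper's pseudocode:

```python
def generator_newdp(ai, d, phi_p, IC_a, C_a, grans):
    """True premises of d of the form {ai} | E (Definition 9 / Proposition 12)."""
    rest0 = sorted(g for g in phi_p if ai in IC_a[g])     # objects still to add
    P0 = {g for g in phi_p if ai not in IC_a[g]}          # Proposition 11
    found = set()

    def extend(A, P, rest):
        for idx, g in enumerate(rest):
            if not (A <= IC_a[g]):                        # unchanged, Prop 12(1)
                P = P | {g}
                continue
            # A is invalid under P | {g} (Prop 12(2)): branch on new candidates
            for b in sorted(C_a - IC_a[g] - grans):       # Prop 12(3)(b) + Prop 8
                Anew = A | {b}
                # Prop 12(3)(c): every maximal proper subset must fail |- under P | {g}
                if all(any(Anew - {x} <= IC_a[h] for h in P | {g}) for x in Anew):
                    extend(Anew, P | {g}, rest[idx + 1:])
            return                                        # A itself is discarded
        found.add(frozenset(A))                           # A holds over all of Phi'(d)

    extend({ai}, P0, rest0)
    return [set(A) for A in found]
```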

Example 2. (Continuing Example 1) For the decision context K in Table 1 and the granulation context K_a in Table 2, Table 3 lists the DICB of K.

[Table 3: The DICB of K]

According to Algorithm 1 (Steps 1–6), the unchanged decision premises are computed as {{f}, {e,g}, {c,d}, {d,g}, {e,b,c}, {e,f}, {d,f}}.

Next, we calculate the true premises of the form {ai}∪E for each decision attribute. Take the decision attribute h as an example. Firstly, we calculate Φ′(h) of K_a according to Algorithm 2. According to K_a, we have h̄ = {g1, g5, g7, g9}, Φ(h) = {g1, g7, g9}, and a^C = {g1, g2, g3, g4, g5, g6, g7, g8}. We have (a^C∩h̄)∖Φ(h) = {g5}, and by Algorithm 2, there does not exist ai∈C_a satisfying {g1, g5} ⊆ ai^C̃ or {g7, g5} ⊆ ai^C̃, so we have Φ′(h) = Φ(h)∪{g5} = {g1, g5, g7, g9}.

Next, we calculate the true premises of h of the form {ai}∪E. According to Table 2, we have a1^C̃ = {g1, g7}, a2^C̃ = {g2, g6}, a3^C̃ = {g3, g4}, a4^C̃ = {g8} and a5^C̃ = {g5}, where a2^C̃∩Φ′(h) = a3^C̃∩Φ′(h) = a4^C̃∩Φ′(h) = ∅. Thus, {a2}, {a3} and {a4} are true premises of h. For a5, according to Algorithm 3, we gradually add the objects of a5^C̃∩Φ′(h) = {g5} to P. Since {a5} ⊆ g5^C̃, we add the attributes of C_a∖g5^C̃∖{a1, a2, a3, a4} to {a5} to obtain the candidate true premises {{a5,b}, {a5,d}, {a5,e}, {a5,f}, {a5,g}}. However, we have {f} ⊢_{Φ′(h)} h; therefore, the true premises of h containing a5 are {{a5,b}, {a5,d}, {a5,e}, {a5,g}}. Similarly, it can be obtained that the true premises of h containing a1 are {{a1,d}, {a1,b,e}}.

Similarly, we can obtain the true premises of i of the form {ai}∪E, i.e., {{a1,b}, {a1,f}, {a1,g}, {a2,e}, {a2,b,c}, {a3,c}, {a3,e,g}, {a3,e,b}, {a5,b}, {a5,d}, {a5,e}, {a5,g}, {a5}}.

Table 4 lists the DICB of K_a.

[Table 4: The DICB of K_a]

5  Experimental Verification

In order to verify the effectiveness of the proposed method, we conducted experiments on real data sets. We selected four UCI data sets, performed preprocessing such as removing missing values and normalizing continuous values, and generated binary data sets with a threshold of 0.5. Since the true premise-based algorithm (MBTP) in [23] is the most efficient method for generating DICB, especially when |G|/|C| is less than 40, we select MBTP as the baseline algorithm for comparison. In order to produce stable performance, the ratio |G|/|C| is set to 20, and the number of condition attributes is derived accordingly, as shown in Table 5.

[Table 5: Description of the data sets]

In the experiments, MBTP is used to directly generate the DICB of the granulation context, and the proposed method, called the InCremental Method (ICM), is used to generate the DICB of the granulation context by updating the DICB of the decision context. In order to analyze the influence of the number of granulations on the experimental results, we randomly selected about one fifth of the condition attributes and refined them into 2, 3 and 4 fine-grained attributes, respectively.

The average times consumed in generating DICB for each refinement are summarized in Table 6.

[Table 6: Average consumed times of generating DICB]

According to Table 6, on the whole, ICM is more efficient than MBTP. For the data sets whose DICB can be generated within about one minute, such as bank8FM and credit rating, the consumed times may depend more on the runtime environment than for the other data sets, and the comparison results on them may thus be unconvincing. For the other data sets, i.e., supermarket and hypothyroid, MBTP takes about twenty minutes to generate DICB, whereas ICM achieves the same goal in about half the time of MBTP, i.e., about ten minutes.

It should be noted that the consumed time of ICM only contains the updating time from the DICB of the decision context to the DICB of the granulation context, excluding the time of generating the DICB of the decision context. Thus, the above comparison does not imply that ICM is more efficient than MBTP in generating DICB from scratch. However, for one thing, since ICM presupposes the presence of the DICB of the decision context, the comparison is reasonable; for another, since in reality there may be a continual requirement for attribute granulation, ICM becomes more efficient than MBTP, because the time of generating the DICB of the decision context is spent only once and is eventually amortized by the subsequent updates.

6  Conclusion and Further Works

In order to reduce the complexity of regenerating decision implications on attribute granulation, this paper showed that the update of decision implications can be accomplished by the update of DICB. To this end, the paper discussed the properties of DICB on attribute granulation and designed an incremental method for updating DICB. Experimental results show that updating DICB is more efficient than regenerating it from the decision context.

The proposed method also has some limitations. For example, it only considers the refinement of one condition attribute in the update of DICB. Although the update for multiple attributes can be completed by applying the proposed method repeatedly, doing so may be inefficient.

In addition, this paper only examines the update of DICB after condition attribute granulation and does not take decision attribute granulation into consideration. The results in [20,21] show that condition attributes and decision attributes in decision implications are not symmetrical, as reflected in the semantical and syntactical aspects of decision implication. For example, in the syntactical aspect, by AUGMENTATION, one can augment the premises (condition attributes) and reduce the conclusions (decision attributes), implying that the method proposed in this paper cannot be applied directly to decision attribute granulation. Therefore, it is necessary to study the update of decision implications under decision attribute granulation, not only for generating DICB but also for deeply understanding the system of decision implications.

Finally, the proposed method can also be combined with other applications [43]. For example, through attribute refinement, some low-level features in images, such as texture, can be refined into high-level features, such as texture direction and texture perimeter. By computing a compact knowledge base such as DICB, features with more semantic information can be obtained. In other words, the information in an image can be fully utilized to segment it, thus improving the accuracy and robustness of segmentation.

Acknowledgement: This work was supported by the National Natural Science Foundation of China (Nos. 61972238, 62072294).

Funding Statement: This research was funded by the National Natural Science Foundation of China (www.nsfc.gov.cn) under Grant No. 61972238 (received by Y. H. Zhai) and No. 62072294 (received by D. Y. Li).

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

1. R. Wille, "Restructuring lattice theory: An approach based on hierarchies of concepts," in I. Rival (Ed.), Ordered Sets, Berlin, Heidelberg: Springer, pp. 445–470, 1982.

2. H. L. Zhi and J. H. Li, "Granule description based on formal concept analysis," Knowledge-Based Systems, vol. 104, pp. 62–73, 2016.

3. Y. H. Zhai, D. Y. Li and K. S. Qu, "Fuzzy decision implications," Knowledge-Based Systems, vol. 37, pp. 230–236, 2013.

4. C. Carpineto and G. Romano, "Rule mining," in Concept Data Analysis: Theory and Applications, 1st ed., Hoboken, NJ: Wiley, pp. 155–165, 2004.

5. Y. H. Zhai, J. J. Qi, D. Y. Li, C. Zhang and W. H. Xu, "The structure theorem of three-way concept lattice," International Journal of Approximate Reasoning, vol. 146, no. 12, pp. 157–173, 2022.

6. X. Y. Chen, J. J. Qi, X. M. Zhu, X. Wang and Z. Wang, "Unlabelled text mining methods based on two extension models of concept lattices," International Journal of Machine Learning and Cybernetics, vol. 11, no. 2, pp. 475–490, 2020.

7. A. Houari, W. Ayadi and S. B. Yahia, "A new FCA-based method for identifying biclusters in gene expression data," International Journal of Machine Learning and Cybernetics, vol. 9, no. 11, pp. 1879–1893, 2018.

8. W. H. Xu and W. T. Li, "Granular computing approach to two-way learning based on formal concept analysis in fuzzy datasets," IEEE Transactions on Cybernetics, vol. 46, no. 2, pp. 366–379, 2016.

9. C. F. Zou, H. F. Deng, J. F. Wan, Z. R. Wang and P. Deng, "Mining and updating association rules based on fuzzy concept lattice," Future Generation Computer Systems, vol. 82, pp. 698–706, 2018.

10. A. Mouakher and S. B. Yahia, "On the efficient stability computation for the selection of interesting formal concepts," Information Sciences, vol. 472, pp. 15–34, 2019.

11. C. Carpineto and G. Romano, "A lattice conceptual clustering system and its application to browsing retrieval," Machine Learning, vol. 24, no. 2, pp. 95–122, 1996.

12. W. Y. Bao, C. C. Shen and Y. Li, "Collaborative information retrieval framework based on FCA," in 2015 12th Web Information System and Application Conf. (WISA), Jinan, China, pp. 175–178, 2015.

13. H. L. Zhi, J. J. Qi, T. Qian and R. S. Ren, "Conflict analysis under one-vote veto based on approximate three-way concept lattice," Information Sciences, vol. 516, no. 2, pp. 316–330, 2020.

14. H. L. Zhi, J. H. Li and Y. N. Li, "Multilevel conflict analysis based on fuzzy formal contexts," IEEE Transactions on Fuzzy Systems, vol. 30, no. 12, pp. 5128–5142, 2022.

15. G. M. Lang and Y. Y. Yao, "Formal concept analysis perspectives on three-way conflict analysis," International Journal of Approximate Reasoning, vol. 152, no. 407, pp. 160–182, 2023.

16. C. F. Zou, D. Q. Zhang, J. F. Wan, M. M. Hassan and J. Lloret, "Using concept lattice for personalized recommendation system design," IEEE Systems Journal, vol. 11, no. 1, pp. 305–314, 2017.

17. H. Mezni and T. Abdeljaoued, "A cloud services recommendation system based on fuzzy formal concept analysis," Data & Knowledge Engineering, vol. 116, no. 4, pp. 100–123, 2018.

18. W. Q. Huang, F. Hao, G. Y. Pang and Y. F. Sun, "Complementary context-enhanced concept lattice aware personalized recommendation," in 2021 IEEE 20th Int. Conf. on Trust, Security and Privacy in Computing and Communications, Shenyang, China, pp. 919–926, 2021.

19. K. S. Qu, Y. H. Zhai, J. Y. Liang and M. Chen, "Study of decision implications based on formal concept analysis," International Journal of General Systems, vol. 36, no. 2, pp. 147–156, 2007.

20. Y. H. Zhai, D. Y. Li and K. S. Qu, "Decision implications: A logical point of view," International Journal of Machine Learning and Cybernetics, vol. 5, no. 4, pp. 509–516, 2014.

21. Y. H. Zhai, N. Jia, S. X. Zhang, D. Y. Li and W. H. Xu, "Study on deduction process and inference methods of decision implications," International Journal of Machine Learning and Cybernetics, vol. 13, no. 7, pp. 1959–1979, 2022.

22. Y. H. Zhai, D. Y. Li and K. S. Qu, "Decision implication canonical basis: A logical perspective," Journal of Computer and System Sciences, vol. 81, no. 1, pp. 208–218, 2015.

23. D. Y. Li, S. X. Zhang and Y. H. Zhai, "Method for generating decision implication canonical basis based on true premises," International Journal of Machine Learning and Cybernetics, vol. 8, no. 1, pp. 57–67, 2017.

24. S. X. Zhang, D. Y. Li and Y. H. Zhai, "Incremental method of generating decision implication canonical basis," Soft Computing, vol. 26, no. 3, pp. 1067–1083, 2022.

25. J. H. Li, C. L. Mei, C. A. Kumar and X. Zhang, "On rule acquisition in decision formal contexts," International Journal of Machine Learning and Cybernetics, vol. 4, no. 6, pp. 721–731, 2013.

26. W. Z. Wu, Y. Leung and J. S. Mi, "Granular computing and knowledge reduction in formal contexts," IEEE Transactions on Knowledge and Data Engineering, vol. 21, no. 10, pp. 1461–1474, 2009.

27. K. Y. Qin, B. Li and Z. Pei, "Attribute reduction and rule acquisition of formal decision context based on object (property) oriented concept lattices," International Journal of Machine Learning and Cybernetics, vol. 10, no. 10, pp. 2837–2850, 2019.

28. J. P. Xie, M. H. Yang, J. H. Li and Z. Zheng, "Rule acquisition and optimal scale selection in multi-scale formal decision contexts and their applications to smart city," Future Generation Computer Systems, vol. 83, no. 3, pp. 564–581, 2018.

29. H. L. Zhi and Y. F. Ma, "Fuzzy association rule acquisition based on granule description," Chinese Journal of Kunming University of Science and Technology (Natural Science), vol. 47, no. 5, pp. 61–68, 2022.

30. Q. Hu and K. Y. Qin, "Attribute reduction and rule acquisition of formal decision context based on dual concept lattice," in 19th Int. Conf. on Web-Based Learning and 5th Int. Symp. on Emerging Technologies for Education, Ningbo, China, pp. 117–129, 2021.

31. S. X. Zhang, D. Y. Li, Y. H. Zhai and X. P. Kang, "A comparative study of decision implication, concept rule and granular rule," Information Sciences, vol. 508, no. 1, pp. 33–49, 2020.

32. L. A. Zadeh, "Toward a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic," Fuzzy Sets and Systems, vol. 90, no. 2, pp. 111–127, 1997.

33. X. P. Kang and D. Q. Miao, "A study on information granularity in formal concept analysis based on concept-bases," Knowledge-Based Systems, vol. 105, pp. 147–159, 2016.

34. X. P. Kang, D. Y. Li and S. G. Wang, "Research on domain ontology in different granulations based on concept lattice," Knowledge-Based Systems, vol. 27, no. 3, pp. 152–161, 2012.

35. R. Belohlavek, B. D. Baets and J. Konecny, "Granularity of attributes in formal concept analysis," Information Sciences, vol. 260, no. 4, pp. 149–170, 2014.

36. Y. Wan and L. G. Zou, "An efficient algorithm for decreasing the granularity levels of attributes in formal concept analysis," IEEE Access, vol. 7, pp. 11029–11040, 2019.

37. L. G. Zou, Z. P. Zhang and J. Long, "An efficient algorithm for increasing the granularity levels of attributes in formal concept analysis," Expert Systems with Applications, vol. 46, no. 3, pp. 224–235, 2016.

38. M. W. Shao, M. M. Lv, K. Li and Z. Z. Wang, "The construction of attribute (object)-oriented multi-granularity concept lattices," International Journal of Machine Learning and Cybernetics, vol. 11, no. 5, pp. 1017–1032, 2020.

39. L. Wei and Q. Wan, "Granular transformation and irreducible element judgment theory based on pictorial diagrams," IEEE Transactions on Cybernetics, vol. 46, no. 2, pp. 380–387, 2016.

40. J. J. Qi, L. Wei and Q. Wan, "Multi-level granularity in formal concept analysis," Granular Computing, vol. 4, no. 3, pp. 351–362, 2019.

41. Q. Hu and K. Y. Qin, "The construction of multi-granularity concept lattices," Journal of Intelligent & Fuzzy Systems, vol. 39, no. 3, pp. 2783–2790, 2020.

42. L. K. Guo, F. P. Huang, Q. G. Li and G. Q. Zhang, "Power contexts and their concept lattices," Discrete Mathematics, vol. 311, no. 18–19, pp. 2049–2063, 2011.

43. K. Chen, H. Che, M. F. Leung and Y. Wang, "An improved superpixel-based fuzzy C-Means method for complex picture segmentation tasks," in 2022 14th Int. Conf. on Advanced Computational Intelligence (ICACI), Wuhan, China, pp. 231–238, 2022.




cc Copyright © 2023 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.