A randomized double-blind controlled trial of automated term dissection

OBJECTIVE: To compare the accuracy of an automated mechanism for term dissection to represent the semantic dependencies within a compositional expression, with the accuracy of a practicing Internist to perform this same task. We also compare the results of four evaluators to determine the inter-observer variability and the variance between term sets, with respect to the accuracy of the mappings and the consistency of the failure analysis.

Bibliographic details
Published in: Proceedings. AMIA Symposium. - 1998. - (1999), dated: 23, pages 62-6
First author: Elkin, P L (Author)
Additional authors: Bailey, K R; Ogren, P V; Bauer, B A; Chute, C G
Format: Article
Language: English
Published: 1999
Parent work: Proceedings. AMIA Symposium
Subject headings: Clinical Trial; Comparative Study; Journal Article; Randomized Controlled Trial; Research Support, U.S. Gov't, P.H.S.
LEADER 01000naa a22002652 4500
001 NLM10496314X
003 DE-627
005 20231222133638.0
007 tu
008 231222s1999 xx ||||| 00| ||eng c
028 5 2 |a pubmed24n0350.xml 
035 |a (DE-627)NLM10496314X 
035 |a (NLM)10566321 
040 |a DE-627  |b ger  |c DE-627  |e rakwb 
041 |a eng 
100 1 |a Elkin, P L  |e verfasserin  |4 aut 
245 1 2 |a A randomized double-blind controlled trial of automated term dissection 
264 1 |c 1999 
336 |a Text  |b txt  |2 rdacontent 
337 |a ohne Hilfsmittel zu benutzen  |b n  |2 rdamedia 
338 |a Band  |b nc  |2 rdacarrier 
500 |a Date Completed 01.02.2000 
500 |a Date Revised 10.12.2019 
500 |a published: Print 
500 |a Citation Status MEDLINE 
520 |a OBJECTIVE: To compare the accuracy of an automated mechanism for term dissection to represent the semantic dependencies within a compositional expression, with the accuracy of a practicing Internist to perform this same task. We also compare the results of four evaluators to determine the inter-observer variability and the variance between term sets, with respect to the accuracy of the mappings and the consistency of the failure analysis. 
520 |a METHODS: 500 terms, which required a compositional expression to effect an exact match, were randomly distributed into two sets of 250 terms (Set A and Set B). Set A was dissected using the Automated Term Dissection (ATD) algorithm. Set B was dissected by a physician specializing in Internal Medicine who had no prior knowledge of the dissection algorithm or how it functioned; the authors refer to this method as Human Term Dissection (HTD). Set A was randomized into two sets of 125 terms (Set A1 and Set A2), and Set B was randomized into two sets of 125 terms (Set B1 and Set B2). A new set of 250 terms (Set C) was created from Set A1 and Set B2, and a second new set of 250 terms (Set D) was created from Set A2 and Set B1. Two expert indexers reviewed Set C and another two expert indexers reviewed Set D. They were blinded to which terms were dissected by the clinician and which by the automated term dissection algorithm; the person providing the review files to the indexers was also unaware of which terms were dissected by the ATD vs. the HTD method. The indexers recorded whether or not each dissection was the best possible representation of the input concept. If not, a failure analysis was conducted: they recorded whether the dissection was in error and, if so, whether a modifier was not subsumed or a Kernel concept was subsumed when it should not have been. If a concept was missing, the indexers recorded whether it was a Kernel concept, a modifier, a qualifier or a negative qualifier. 
520 |a RESULTS: The ATD method was judged to be accurate and readable in 265 of the 424 terms with adequate content (62.7%). The HTD method was judged to be accurate in 272 of 414 terms with adequate content (65.7%). There was no statistically significant difference between the acceptability rates of the ATD and HTD methods (p = 0.33). There was a non-significant trend toward greater acceptability of the ATD method in the subgroup of terms with three or more compositional elements: ATD was acceptable in 53.6% of these terms, whereas HTD was acceptable in only 43.6% (p = 0.11). The failure analysis showed that both methods misrepresented Kernel concepts and modifiers much more commonly than qualifiers (p < 0.001). 
520 |a CONCLUSIONS: There is no statistically significant difference in the accuracy and readability of terms dissected using the automated term dissection method when compared with human term dissection, as judged by four expert medical indexers. There is a non-significant trend toward improved performance of the ATD method in the subset of more complex terms; the authors submit that this may be due to a tendency for users to be less compulsive when the time to complete the task is long. Automated term dissection is a useful and perhaps preferable method for representing readable and accurate compound terminological expressions. 
650 4 |a Clinical Trial 
650 4 |a Comparative Study 
650 4 |a Journal Article 
650 4 |a Randomized Controlled Trial 
650 4 |a Research Support, U.S. Gov't, P.H.S. 
700 1 |a Bailey, K R  |e verfasserin  |4 aut 
700 1 |a Ogren, P V  |e verfasserin  |4 aut 
700 1 |a Bauer, B A  |e verfasserin  |4 aut 
700 1 |a Chute, C G  |e verfasserin  |4 aut 
773 0 8 |i Enthalten in  |t Proceedings. AMIA Symposium  |d 1998  |g (1999) vom: 23., Seite 62-6  |w (DE-627)NLM098642928  |x 1531-605X  |7 nnns 
773 1 8 |g year:1999  |g day:23  |g pages:62-6 
912 |a GBV_USEFLAG_A 
912 |a SYSFLAG_A 
912 |a GBV_NLM 
912 |a GBV_ILN_350 
951 |a AR 
952 |j 1999  |b 23  |h 62-6
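
Illustrative sketch: the METHODS field above describes a nested randomization with blinded review, and the RESULTS field compares acceptability rates of 265/424 (ATD) versus 272/414 (HTD) with p = 0.33. The Python sketch below is not taken from the paper; the placeholder term strings, helper names, and the choice of a pooled two-proportion z-test are assumptions for illustration (the abstract does not state which test was used). It mirrors the described randomization scheme and checks that the reported counts are consistent with the reported p-value.

import math
import random

def split_in_half(items, rng):
    # Randomly split a list into two halves of equal size.
    shuffled = list(items)
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

def build_blinded_review_sets(terms, rng):
    # 500 terms -> Set A / Set B (250 each); A1/A2 and B1/B2 (125 each);
    # Set C = A1 + B2 and Set D = A2 + B1, each shuffled so the reviewing
    # indexers cannot tell which terms came from ATD and which from HTD.
    set_a, set_b = split_in_half(terms, rng)   # A dissected by ATD, B by the internist (HTD)
    a1, a2 = split_in_half(set_a, rng)
    b1, b2 = split_in_half(set_b, rng)
    set_c, set_d = a1 + b2, a2 + b1
    rng.shuffle(set_c)
    rng.shuffle(set_d)
    return set_c, set_d

def two_proportion_z_test(x1, n1, x2, n2):
    # Two-sided z-test for the difference of two proportions, pooled standard error.
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

if __name__ == "__main__":
    rng = random.Random(0)
    terms = ["term_%03d" % i for i in range(500)]      # placeholder term strings (assumption)
    set_c, set_d = build_blinded_review_sets(terms, rng)
    print(len(set_c), len(set_d))                      # 250 250

    # Reported counts: ATD accurate in 265/424 terms, HTD accurate in 272/414 terms.
    z, p = two_proportion_z_test(265, 424, 272, 414)
    print("z = %.2f, p = %.2f" % (z, p))               # p comes out to about 0.33

With the reported counts, this test gives z of about -0.97 and p of about 0.33, in line with the non-significant difference stated in the RESULTS field.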