Comprehensive Stereotype Content Dictionaries Using A Semi‐automated Method


Received: 12 March 2020; Accepted: 29 September 2020
DOI: 10.1002/ejsp.2724
Eur J Soc Psychol. 2021;51:178–196.

RESEARCH ARTICLE

Comprehensive stereotype content dictionaries using a semi-automated method

Gandalf Nicolas, Xuechunzi Bai, Susan T. Fiske
Department of Psychology, Princeton University, Princeton, NJ, USA

Correspondence: Gandalf Nicolas, Department of Psychology, Rutgers University – New Brunswick, Tillet Hall 607, Piscataway, NJ 08854, USA. Email: gandalf.nicolas@rutgers.edu

Abstract

Advances in natural language processing provide accessible approaches to analyze psychological open-ended data. However, comprehensive instruments for text analysis of stereotype content are missing. We developed stereotype content dictionaries using a semi-automated method based on WordNet and word embeddings. These stereotype content dictionaries covered over 80% of open-ended stereotypes about salient American social groups, compared to 20% coverage from words extracted directly from the stereotype content literature. The dictionaries showed high levels of internal consistency and validity, predicting stereotype scale ratings and human judgments of online text. We developed the R package Semi-Automated Dictionary Creation for Analyzing Text (SADCAT; https://github.com/gandalfnicolas/SADCAT) for access to the stereotype content dictionaries and the creation of novel dictionaries for constructs of interest. Potential applications of the dictionaries range from advancing person perception theories through laboratory studies and analysis of online data to identifying social biases in artificial intelligence, social media, and other ubiquitous text sources.

KEYWORDS: dictionaries, stereotype content, text analysis, word embeddings, WordNet

Text data are everywhere. Researchers may obtain text data from sources such as the internet, literary collections, archival entries, and experimental psychology's open-ended responses. Compared to traditional response scales in psychological research, embracing text data allows more unobtrusive and unconstrained approaches to measurement. For example, social media data provide information about participants' cognitions, free from demand characteristics associated with some laboratory studies (see Meshi et al., 2015).

Using open-ended (vs. forced-choice) responses in controlled settings also enables more ecologically valid and data-driven study of psychological processes and content. These benefits appear in studying emotion (Gendron et al., 2015) and racial categorization (Nicolas et al., 2018), challenging previously held findings by employing free-response measures that circumvent researcher constraints on participants' responses. For example, despite several studies showing that Americans categorize Black-White mixed-race faces as Black when only allowed to make Black versus White categorizations (see Nicolas & Skinner, 2017), in a free response task participants most frequently indicated perceiving these targets to be Hispanic or Middle-Eastern (Nicolas et al., 2018). These kinds of online and open-ended text data, however, often need some form of dimensionality reduction and numerical representation for interpretation (e.g., due to the large number of words that may refer to the same overarching construct of interest), making text analysis methods necessary.
Language and text analysis have a long history in psychology (e.g., Dewey, 1910; Miller, 1951; see Boyd, 2017 for a review) and affiliated fields ranging from Sociology and Political Science (see Lucas et al., 2015) to Computer Science (see Nerbonne, 2003). In social and personality psychology in particular, numerous studies have made use of text analysis to obtain novel insights into human traits and behaviors. For example, studies into status differences in language use have shown that higher (vs. lower) status individuals tend to use we more often than I as pronouns (e.g., Kacewicz et al., 2014).

Other studies have applied text analysis to interpersonal relations, finding for example that members of longer-lasting relationships tend to match their linguistic style and use more positive emotional words (Slatcher & Pennebaker, 2006). Tapping into the vast amounts of online data, text analysis in the field even shows promise of predicting health-related behaviors (see Chung & Pennebaker, 2019) and bringing in more explanation into descriptive frameworks of personality (see Boyd & Pennebaker, 2017).

Despite the advantages, creating and validating text analysis instruments such as dictionaries differs considerably from developing traditional scales, and currently not many appropriately reviewed guidelines exist. As a result, many areas have yet to fully incorporate text analysis methods into their repertoire. An example is stereotyping, which despite being one of the largest research areas within social psychology, suffers from a dearth of specialized text analysis methods and literature that may support new avenues of research (reviewed below).

1 CURRENT APPROACHES TO TEXT ANALYSIS IN PSYCHOLOGY

Recently, advances in natural language processing in machine learning allow easier extraction of information about psychological processes and content. The most common method to analyze text data in psychology has traditionally been human coding. In this approach, each text is evaluated by a group of human judges in terms of how much it reflects a construct of interest. Measures of agreement between human judges often document reliability. Evidently, however, this approach is time-consuming and resource-demanding, and these limitations rapidly worsen the more data that need to be coded (Iliev et al., 2015). Furthermore, this approach for text analysis lacks standardization—that is, judges' coding may vary across studies or laboratories.

An increasingly popular alternative to per-study human coding of text is offered by dictionaries (see Iliev et al., 2015). Dictionaries list words that are indicators of the construct of interest. Once created, dictionaries are a standardized approach for coding text data, across studies, without additional human judge intervention. For this reason, they are also less resource-intensive and time-consuming for users. Dictionaries are also easy to use in analysis (vs. some more advanced natural language processing methods). The analysis process most often consists of counting the number of words in a text that are included in the dictionary. The larger the number of words from the dictionary that are present in the text, the higher the score for the construct of interest measured by the instrument. To illustrate, if evaluating the positivity of a particular text (e.g., a self-description, or a diary entry), a researcher would count the number of words that fall into a positive valence dictionary (e.g., "good," "nice," "amazing") as a measure of the construct.
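To make the counting procedure concrete, the following base-R sketch scores a text against a small word list. It is only an illustration: the positive-valence word list and the example sentence are invented and are not part of the dictionaries developed in this article.

# Minimal sketch of dictionary-based scoring; word list and text are illustrative.
positive_dict <- c("good", "nice", "amazing", "friendly", "kind")

score_text <- function(text, dictionary) {
  tokens <- unlist(strsplit(tolower(text), "[^a-z]+"))  # lowercase and tokenize
  tokens <- tokens[tokens != ""]
  hits <- sum(tokens %in% dictionary)                   # dictionary words present
  c(raw_count = hits, proportion = hits / length(tokens))
}

score_text("She is a good and amazing colleague, always nice to others.",
           positive_dict)
# 3 of the 11 tokens ("good", "amazing", "nice") match, giving a raw count of 3
# and a proportion of about 0.27; scores are often length-normalized in this way.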
The most widely used set of dictionaries in psychology and akin areas is the Linguistic Inquiry and Word Count (LIWC; Pennebaker et al., 2015). LIWC has been the benchmark for studying text related to content as varied as emotion, social relationships, thinking styles, among others (see Tausczik & Pennebaker, 2010). The original creation and wide usage of the LIWC dictionaries highlights both that available dictionaries are cheaper and easy to implement, and that the cost of creating new dictionaries in the first place may be prohibitively expensive and time-consuming for many researchers. For example, many of the LIWC dictionaries used up to 8 judges across several stages in order to manually expand the word lists (Pennebaker et al., 2015).

Given that the constructs measured by existing dictionaries are inevitably limited compared to the diversity of constructs studied by psychologists, more accessible methods to facilitate dictionary creation are useful. For example, even topic areas as central to social psychology as stereotype content (see Fiske et al., 2010) lack comprehensive specialized instruments for their measurement in text, an issue we sought to address in the current paper.

2 STEREOTYPE CONTENT

Stereotypes are beliefs about social groups and are encoded and shaped through language (Maass, 1999). In fact, a myriad of linguistic factors matter in the perception of social groups: from the natural language itself (e.g., the presence of grammatical gender in a language affecting gender representations; see Sato et al., 2013), to the concreteness of the language used (e.g., ingroups being described more abstractly than outgroups when performing positive actions; Maass et al., 1989; Semin & Fiedler, 1992), to the part of speech used (e.g., group cues presented through nouns rather than adjectives lead to stronger stereotype-congruent inferences; Carnaghi et al., 2008). The study of stereotypes through explicit person descriptions, in particular, has one of the longest traditions within social psychology (Bergsieker et al., 2012, Study 4; Katz & Braly, 1933). However, to date, no comprehensive instruments for the analysis of text data have been developed in the area. This article provides such an instrument for measuring several relevant dimensions of content.

The stereotype content model (SCM; Fiske et al., 2002), a well-known current framework, proposes that people primarily use two dimensions to think about individuals and groups: warmth (i.e., is this target a friend or a foe?) and competence (i.e., can this target act on their intentions?). A large body of research has corroborated that evaluations along these dimensions occur cross-culturally (Fiske, 2018). The combination of the two core dimensions also predicts intergroup emotions and behavioral tendencies (Cuddy et al., 2007).

More recent models of stereotype content have either defined different facets of warmth and competence, or proposed novel, distinct dimensions of stereotype content. For example, Abele and colleagues (2016) suggest subdividing Warmth (also called Communion) into friendliness/sociability and morality facets and Competence (also called agency) into ability and assertiveness (see also Ellemers, 2017; Goodwin, 2015). The recent Agency-Beliefs-Communion model (ABC; Koch et al., 2016) introduces beliefs (i.e., religious-secular beliefs and political orientation) and status. Thus, stereotype content dimensions are still contested, and open-ended data could shed some light on this issue.

3 TEXT ANALYSIS IN STEREOTYPE CONTENT

The stereotype content literature has so far largely relied on traditional metrics of measurement, in particular Likert-type scales measuring how much a social group allegedly possesses a particular dimension of content. A couple of studies (Decter-Frain & Frimer, 2016; Dupree & Fiske, 2019) have used some LIWC dictionaries to measure warmth (e.g., the family and friend dictionaries) and competence (e.g., the work and achievement dictionaries). However, because these dictionaries were not designed to measure those constructs, they may cover both a small subset of appropriate words and correlated constructs rather than the target concepts. For example, the LIWC affiliation dictionary includes words such as friend or friendly, but not low-directional antonyms (e.g., enemy or unfriendly). In the absence of a specialized indicator or separate dictionary for the antonyms of these dimensional constructs, text data that include responses along the whole dimension (such as stereotypes) will suffer from lack of coverage or loss of information, depending on the application. Finally, a recent study (Pietraszkiewicz et al., 2018) developed dictionaries of communion (similar to warmth) and agency (similar to competence) using the LIWC development approach, but these included only a subset of possible words (e.g., only high directional), did not provide explicit indicators for the different facets of these dimensions, and did not cover other stereotype dimensions.

For an area such as stereotype content, where responses go beyond a small set of categories, to a large number of possible nouns and adjectives, developing a more comprehensive instrument becomes even more vital to faithfully characterizing text content. Potentially, this instrument could expand current theoretically derived models of social cognition by exploring open-ended responses in controlled experiments, in addition to examining stereotype content in multiple untapped sources of text data online. For example, Fiske et al. (in press) argue that stereotypes obtained through open-ended and text measures may differ from traditional scale-based stereotypes, providing information into which stereotypes are more central to social groups' representation and improving predictive models of discrimination arising from stereotyping. For example, while traditional scale ratings of Warmth and Competence would place Doctors and Nurses as similarly high on both dimensions, spontaneous text responses (e.g., coded through dictionaries) suggest that Warmth is more representative of the stereotype content of Nurses while Competence is more representative of the stereotypes of Doctors. Furthermore, this type of information derived from text responses significantly improves predictions of attitudes toward social groups (see Nicolas et al., 2020). Others argue that stereotypical explicit person descriptions extracted from large online corpora (e.g., social media) may sometimes function more like implicit than explicit stereotypes measured in traditional laboratory scales (Kurdi et al., 2019). These kinds of theoretical advances are greatly facilitated, and sometimes only possible, through the use of automated methods dependent on the existence of valid stereotype content dictionaries. In fact, the simple exploration of the structure of dictionaries in this article may provide some insights into the structure of stereotypes in natural language, as we briefly explore.

In this article, we introduce novel stereotype content dictionaries that fill a void in the study of stereotyping in text. We develop these dictionaries using an approach (incorporating some natural language processing methods in novel ways) that is described in the text and made available through an R package for other researchers to use when developing dictionaries for other constructs of their interest. Finally, we use traditional and emergent techniques to evaluate the coverage, reliability, and validity of the dictionaries. This new approach provides a complementary way to automatize many processes, in order to facilitate new dictionaries that are also less coder-reliant, may handle more words, and address distinctive topics, among other benefits. We make available helper functions used to create dictionaries using this approach in the R package Semi-Automated Dictionary Creation for Analyzing Text (SADCAT), available at https://github.com/gandalfnicolas/SADCAT. The package also contains functions to code text into the stereotype content dictionaries developed here. All data and code for the analyses presented here are also available at https://osf.io/yx45f/.
4 COVERAGE, RELIABILITY, AND VALIDITY

Dictionary creation aimed to achieve three indicators of quality: coverage, internal reliability, and convergent validity.

4.1 Coverage

A traditional psychological scale can measure a construct with a few items sampled from a larger pool of intercorrelated items without wasting any data. However, with text measures, where participants choose the items (i.e., words) they wish to convey about the construct, a larger pool of items is needed to code the participants' responses. Coverage refers to the proportion of possible participant responses that is covered by the dictionary (i.e., the pool of items). Coverage will be domain-dependent. Thus, our dictionaries aim to explain a majority of participants' responses when prompted to provide stereotype content of social groups (i.e., stereotypes). Here, we use WordNet (Miller, 1995), a lexical database with semantic relations between words, in order to automatically expand an initial set of words into their synonyms, antonyms, etc., to increase the coverage of open-ended stereotypes provided by a sample of American respondents.
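As a rough illustration of how coverage can be computed, the base-R sketch below takes coverage as the share of open-ended responses found in a dictionary, both across all responses and across distinct responses. The dictionary and responses here are invented for the example and are not the development data.

# Illustrative coverage check; the dictionary and responses are invented.
dictionary <- c("smart", "intelligent", "rich", "wealthy", "kind", "caring")
responses  <- c("smart", "smart", "hardworking", "rich",
                "intelligent", "funny", "caring", "smart")

# Proportion of all responses accounted for by the dictionary
token_coverage <- mean(responses %in% dictionary)           # 6/8 = 0.75

# Proportion of distinct responses accounted for by the dictionary
type_coverage  <- mean(unique(responses) %in% dictionary)   # 4/6 = 0.67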

4.2 Internal reliability

We refer to internal reliability as the consistency and intercorrelations of the pool of items that make up the dictionaries. In other words, reliability measures whether words within a dictionary bear higher semantic similarity than words in different dictionaries. To measure semantic similarity, we adapt recent methods in natural language processing (Mikolov et al., 2013; Pennington et al., 2014) to generate numeric vector representations of text data. Obtaining pairwise similarities from these vectors enables calculations of traditional metrics of internal reliability, such as the average inter-item "correlation" (in this case average cosine similarity) or Cronbach's alpha.
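The base-R sketch below illustrates this computation. The embedding matrix is filled with random numbers as a placeholder for pretrained word vectors (e.g., GloVe or word2vec), so the resulting values will hover near zero; with real embeddings of semantically related dictionary words, the average cosine similarity, and hence the alpha-style index, would be substantially higher. The alpha used here is the standardized form based on the average inter-item similarity.

# Internal reliability sketch: average pairwise cosine similarity among a
# dictionary's word vectors, converted to a standardized alpha.
# The vectors below are random placeholders standing in for pretrained embeddings.
set.seed(1)
k <- 5  # number of words in the (illustrative) dictionary
embeddings <- matrix(rnorm(k * 300), nrow = k,
                     dimnames = list(c("friendly", "warm", "kind",
                                       "sociable", "pleasant"), NULL))

cosine <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))

pairs <- combn(k, 2)  # all pairs of distinct words
sims  <- apply(pairs, 2, function(p) cosine(embeddings[p[1], ], embeddings[p[2], ]))

mean_sim <- mean(sims)                                 # average inter-item similarity
alpha    <- (k * mean_sim) / (1 + (k - 1) * mean_sim)  # standardized Cronbach's alpha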

4.3 Validity

Validity is relatively straightforward as it is most similar to scales. Using the dictionaries allows us to code the construct of interest from participants' responses, which may then correlate with other constructs expected to be theoretically related (i.e., convergent validity) or unrelated (i.e., divergent validity). Here we test validity against multiple data sources and compare our comprehensive dictionaries with existing dictionaries used to measure stereotype content in text.

We note that the current dictionaries are validated for the domain of explicit person descriptions. This is one of the most relevant and widely studied topics in social psychology, ranging from studies on social group stereotypes (see Fiske et al., 2010) to face impressions (see Todorov, 2017). Explicit and blatant stereotyping is very much alive and widespread (e.g., Kteily & Bruneau, 2017; Roberts & Rizzo, in press), and these instruments aim to measure their use in experimental and online settings. However, based on factors such as social desirability, text data may instead portray stereotypes implicitly or indirectly, or may provide information on the author rather than targets described in text. Our dictionaries may likely be useful for these applications as well (e.g., based on correlations with dictionaries more explicitly designed to measure such indicators, Pietraszkiewicz et al., 2018), but future validation of these applications is necessary.

5 DICTIONARY CREATION OVERVIEW

Dictionaries evolved through an iterative process that subsequent sections will explain, and that is summarized in a flowchart in Figure 1. To anticipate: In Study 1 we identified from the literature words covering relevant stereotype and person perception dimensions, forming an initial set of seed word dictionaries. In Study 2 we collected stereotype content text data to test how much the initial seed words accounted for participants' responses (i.e., coverage). In Study 3, we used WordNet to expand the seed words to a larger dictionary. We iterated the process of testing coverage and adding words until we reached a good proportion of dictionary coverage. After completing the dictionaries, in Study 4 we tested dictionary reliability using the similarity metrics discussed above. Finally, we tested the validity of the dictionaries in four ways. First, we explored the convergent and discriminant validity of our dictionaries in comparison to existing dictionaries that measure related constructs (Study 5). Second, we tested validity in relation to scale ratings, that is, whether experimentally requested responses coded with our dictionaries correlated with stereotypes measured by scales (Study 6). Then, we tested validity in relation to human ratings, that is, whether human coders identified the semantic meaning of each dictionary from a small subset of its items (Study 7). Finally, we used the dictionaries to code real-world data and correlate these with human coding along the dimensions (Study 8).

[FIGURE 1: Flowchart and summary of procedures and results.]

[TABLE 1: Example words for each seed dictionary, with their high or low direction (e.g., for Status: Superior is coded high, Powerless is coded low). The table, which continues onto the following page, is not recoverable from this transcription.]

In these studies, we report all measures, manipulations and exclusions. Depending on the within-subject variance, power analyses for all studies reveal over 80% power to detect small effects of r or f between 0.1 and 0.2 in our main tests. Sample size was determined before any data analysis. All validity studies with human subjects were approved by the University ethics committee, and adhered to the ethical guidelines specified in the APA Code of Conduct and the US Federal Policy for the Protection of Human Subjects (including informed consent, right to withdraw, and debriefing).

6 STUDY 1: CREATING SEED DICTIONARIES

In Study 1, we identify stereotype content dimensions that have been previously formalized in stereotype content models and create initial, theory-driven seed dictionaries.

6.1 Methods

We reviewed the literature (Abele et al., 2008, 2016; Fiske et al., 2002; Koch et al., 2016; Oosterhof & Todorov, 2008; Wojciszke et al., 2011) for lists of words used to measure friendliness/sociability, morality/trustworthiness, ability, assertiveness/dominance, status, political beliefs, and religious beliefs in relation to social groups. For every word, if not already included, we also obtained its antonym.

6.2 Results

The final seed dictionaries consisted of 341 distinct words, with their corresponding theoretical direction (i.e., high or low on their corresponding dimension, based on how they were labeled in the reviewed literature). See Table 1 for example words; for a full list see online repository. Because words can have multiple senses (e.g., warm can refer to psychological or physical warmth) the researchers independently went through the list of seed words and decided on the most appropriate sense(s), based on their part of speech, definition, and example sentences, which resulted in a list of 455 senses. The final senses were those which two of the researchers agreed on, 90% of the total senses selected by either of the two researchers.
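To make the structure of a seed dictionary concrete, the sketch below sets up a few entries as an R data frame, each tagged with its dimension and its (arbitrary-polarity) high or low direction. The specific rows are illustrative stand-ins consistent with the examples in the text and Table 1, not the actual 341-word seed lists.

# Illustrative seed-dictionary structure: word, dimension, and high/low direction.
seeds <- data.frame(
  word      = c("superior", "powerless", "friendly", "unfriendly",
                "traditional", "progressive"),
  dimension = c("Status", "Status", "Sociability", "Sociability",
                "Beliefs", "Beliefs"),
  direction = c("high", "low", "high", "low", "high", "low"),
  stringsAsFactors = FALSE
)

# Cross-tabulate to check how balanced high vs. low senses are per dimension
table(seeds$dimension, seeds$direction)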

Dictionaries were mostly balanced in terms of high and low senses, but there were some slight imbalances, such as more low (vs. high) Morality words and more high (vs. low) Ability words. Note that by high and low we do not mean valence: It is simply an indicator of which end of the antonymy dimension the word refers to; whether one or the other antonym is coded as high versus low is arbitrary. For example, we coded beliefs as ranging from progressive to traditional, and thus high direction in this dictionary means that the word is more about traditional beliefs than progressive beliefs. For more information about the seed dictionaries please refer to the Supplement.

6.3 Summary

In an initial theory-driven and human-dependent step, we collected from the literature small dictionaries containing seed words for the constructs of stereotype content. These seed word dictionaries would be expanded in subsequent steps to obtain the final instruments.

7 STUDY 2: SEED DICTIONARIES COVERAGE

In Study 2 we perform an initial test of coverage on development data. That is, we explore how many of participants' open-ended stereotypes about salient U.S. social groups are accounted for by our seed dictionaries.

7.1 Methods

Development data allowed for initial tests of coverage and validity of the dictionaries. The development data consisted of a survey (N = 201, Mage = 37.8, 55% female; 85% White, 6% Black, 3% Hispanic, 3% Asian) asking for participants' spontaneous thoughts about characteristics that different social groups would have. We used a total of 20 social groups (e.g., "Asian", "Elderly", "Wealthy"), sampled from the literature, and showed five to each participant, in random order. Participants provided 10 open-ended single-word responses for each target. Next, participants saw the same social groups again and rated them on warmth (items: friendly, sincere) and competence (items: efficient, competent) using a scale ranging from 1 (not at all) to 5 (extremely), as well as a measure of familiarity with the social group. Finally, participants completed some demographic questions.

The open-ended responses were preprocessed (e.g., lowercased, deleted grammatical signs; see Supplement).
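A minimal sketch of the kind of preprocessing described here (lowercasing and stripping non-letter characters); the full pipeline, including lemmatization, is described in the Supplement, and the responses below are invented.

# Minimal preprocessing sketch for open-ended responses (illustrative data).
responses <- c("Hard-working!", "SMART", "  friendly ", "low-status")

preprocess <- function(x) {
  x <- tolower(x)               # lowercase
  x <- gsub("[^a-z ]", " ", x)  # replace punctuation and other non-letters
  trimws(gsub(" +", " ", x))    # collapse repeated spaces and trim
}

preprocess(responses)
# "hard working" "smart" "friendly" "low status"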
7.2 Results

As expected, the words used in the existing literature to describe content dimensions were not a good measure of the diversity of open-ended responses, accounting for only 20.2% of our development data (6.2% of distinct responses). Mapping the content of spontaneous stereotypes requires accounting for most of the responses. However, open-ended responses allow for any number of synonymous terms that have not been exhaustively listed in previous studies. For instance, even though we were able to find in the literature words such as thief referring to morality, other synonyms such as robber were absent. For this reason, in the next study we expand the dictionaries using WordNet to improve coverage.

7.3 Summary

In this study we tested how many spontaneous stereotypes provided by a sample of American participants in response to a salient sample of social groups were covered by our seed dictionaries. This coverage was very low, meaning that deploying the seed dictionaries to analyze laboratory or online text data would result in large amounts of missing data and undercounting of construct-relevant words. In the next study we address this limitation.

8 STUDY 3: EXPANSION AND FINAL DICTIONARIES

Given the low coverage of the seed dictionaries, in Study 3, we use WordNet (Miller, 1995) to automatically expand the seed dictionaries and improve coverage.

8.1 Methods

Although one could manually gather many words using suggestions from field experts, that labor- and time-consuming method would be limiting. WordNet offers one automated way to obtain a large pool of items by adding words that are semantically associated with a smaller pool of seed words obtained from the literature. WordNet (Miller, 1995) is a large lexical database for the English language. The database contains metadata about English words, including part of speech (i.e., noun, adjective, verb, and adverb), glosses (i.e., short definitions), and usage examples in sentences. Most importantly, WordNet distinguishes words' different senses (e.g., warmth may refer to both psychological warmth and temperature), and these senses then associate with other words/senses through several relations such as synonyms and antonyms. Previous research in other fields has used WordNet to expand dictionaries (e.g., Maks et al., 2014). Here, we apply the procedure to the creation of Stereotype Content Dictionaries.
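The sketch below illustrates the general logic of the semi-automated expansion: starting from seed words, repeatedly add related words (synonyms, antonyms, and other lexical relations) and re-check coverage until it stops improving. The related_words() lookup is a hypothetical stand-in for a WordNet query (in practice the expansion operates on manually verified word senses, e.g., through the SADCAT helpers); it is not a real function from SADCAT or from any WordNet interface, and all example data are invented.

# Conceptual sketch of iterative dictionary expansion.
# related_words() is a hypothetical stand-in for a WordNet lookup.
related_words <- function(word) {
  lookup <- list(thief  = c("robber", "burglar", "crook"),
                 honest = c("truthful", "trustworthy", "dishonest"))
  lookup[[word]]  # returns NULL for words without stored relations
}

coverage <- function(dictionary, responses) mean(responses %in% dictionary)

expand_dictionary <- function(seeds, responses, max_iter = 5) {
  dict <- seeds
  for (i in seq_len(max_iter)) {
    before    <- coverage(dict, responses)
    additions <- unique(unlist(lapply(dict, related_words)))  # pull in related words
    dict      <- unique(c(dict, additions))
    if (coverage(dict, responses) <= before) break  # stop when coverage plateaus
  }
  dict
}

responses <- c("thief", "robber", "honest", "funny")
expand_dictionary(c("thief", "honest"), responses)
# Coverage rises from 0.5 to 0.75 after one expansion round, then plateaus.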

[TABLE 2 (contents garbled in this transcription): per-dictionary statistics, with a row for each dictionary (e.g., Beliefs, Ordinariness, Body part) and columns of word counts and proportions. Note: Words are the original words obtained, including different forms of a word (e.g., plural and singular), while preprocessed words collapse across these by lemmatizing, deleting symbols, among others (see devel…).]
