A Countermeasure Method Using Poisonous Data Against Poisoning Attacks on IoT Machine Learning
https://uec.repo.nii.ac.jp/records/10081
Name / File | License | Action |
---|---|---|
3.pdf (4.4 MB) | |
Item type | Journal Article |
---|---|
Publication date | 2022-09-01 |
Title (en) | A Countermeasure Method Using Poisonous Data Against Poisoning Attacks on IoT Machine Learning |
Language | eng |
Keywords (en) | Adversarial machine learning; security; defense; poisoning attack; detection |
Resource type identifier | http://purl.org/coar/resource_type/c_6501 |
Resource type | journal article |
Creators | Chiba, Tomoki; Sei, Yuichi; Tahara, Yasuyuki; Ohsuga, Akihiko |
Abstract | Machine learning can improve the quality of many areas of modern life, and open data are often used when building machine learning models. As this trend grows, so do the monetary losses caused by attacks on machine learning models, so preparation is considered indispensable before deploying machine learning. Models can be compromised in various ways, including poisoning attacks, in which harmful data are injected into the training data to make the model substantially less accurate. How much disruption such an intrusion causes depends on the circumstances of each case. This research proposes a method to safeguard machine learning models against poisoning, assuming a setting in which the models use training data collected from numerous sources; the diversity of sources itself forms a barrier to poisoning attacks. Each source is evaluated separately, and the weight of each data component is assessed by its effect on the accuracy of the machine learning model. The theoretical effect of using corrupt data from each source is also appraised. An estimated data-removal rate for each source determines how much that subset of data can undermine overall accuracy, and excluding the isolated data on the basis of this figure ensures that the normal data are not tainted. To evaluate the efficacy of the proposed countermeasure, we compared it with well-known standard techniques, measuring how accurately the model performed after the change. In this test, when 17% of the training data were poisoned, the model achieved 89% accuracy with the proposed method, compared with 83% for the traditional technique. The proposed technique thus boosted the model's resilience against harmful intrusion. |
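The source-screening idea described in the abstract can be sketched as a leave-one-source-out check: train with and without each source's data and exclude any source whose removal clearly improves validation accuracy. The classifier, toy data, and threshold below are illustrative assumptions for a minimal sketch, not the paper's exact algorithm.

```python
# Hedged sketch: leave-one-source-out screening for poisoned training sources.
# The nearest-centroid model, data, and margin are illustrative assumptions.
from statistics import mean

def centroid_classifier(train):
    """Fit a 1-D nearest-centroid classifier; train is a list of (x, label)."""
    by_label = {}
    for x, y in train:
        by_label.setdefault(y, []).append(x)
    centroids = {y: mean(xs) for y, xs in by_label.items()}
    return lambda x: min(centroids, key=lambda y: abs(x - centroids[y]))

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

def screen_sources(sources, validation, margin=0.05):
    """Drop any source whose removal improves validation accuracy by > margin.
    Assumes every class still appears after removing a single source."""
    all_data = [p for src in sources.values() for p in src]
    base = accuracy(centroid_classifier(all_data), validation)
    kept = {}
    for name in sources:
        rest = [p for n, s in sources.items() if n != name for p in s]
        without = accuracy(centroid_classifier(rest), validation)
        if without - base <= margin:  # removing this source does not help: keep it
            kept[name] = sources[name]
    return kept

# Toy data: class 0 clusters near 0.0, class 1 near 1.0; source "P" is label-flipped.
sources = {
    "A": [(0.0, 0), (0.1, 0), (0.9, 1), (1.0, 1)],
    "B": [(0.05, 0), (0.95, 1)],
    "P": [(0.0, 1), (0.1, 1), (0.05, 1)],  # poisoned: class-0 points labeled 1
}
validation = [(0.02, 0), (0.08, 0), (0.3, 0), (0.92, 1), (0.98, 1)]

kept = screen_sources(sources, validation)
print(sorted(kept))  # → ['A', 'B']; the poisoned source "P" is excluded
```

Here the poisoned source drags the class-1 centroid toward the class-0 cluster; removing it restores validation accuracy, so the screen excludes it, while removing either clean source yields no improvement.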
Bibliographic information | International Journal of Semantic Computing (en), Vol. 15, No. 02, pp. 215-240, issued 2021-06 |
Publisher | World Scientific Publishing |
ISSN | 1793351X |
DOI | isVersionOf: 10.1142/S1793351X21400043 (DOI) |
Related site | https://doi.org/10.1142/S1793351X21400043 (DOI) |
Version type | AM (Accepted Manuscript), http://purl.org/coar/version/c_ab4af688f83e57aa |