Publications (10 of 68)
Liu, Y., Liu, A., Xia, Y., Hu, B., Liu, J., Wu, Q. & Tiwari, P. (2024). A Blockchain-Based Cross-Domain Authentication Management System for IoT Devices. IEEE Transactions on Network Science and Engineering, 11(1), 115-127
2024 (English) In: IEEE Transactions on Network Science and Engineering, E-ISSN 2327-4697, Vol. 11, no 1, p. 115-127. Article in journal (Refereed). Published.
Abstract [en]

With the emergence of the resource- and equipment-sharing concept, many enterprises and organizations have begun to implement cross-domain sharing of devices, especially in the field of the Internet of Things (IoT). However, the cross-domain use of devices raises many problems, such as access control, authentication, and privacy protection. In this paper, we make the following contributions. First, we propose a blockchain-based cross-domain authentication management system for IoT devices. Sensitive device information is stored in a Merkle tree structure, and only the Merkle root is uploaded to the smart contract. Second, a detailed security and performance analysis is given. We prove that our system is secure against several potential security threats and satisfies validity and liveness. Compared to existing schemes, our scheme achieves decentralization, privacy, scalability, fast off-chain authentication, and low on-chain storage. Third, we implement the system on Ethereum with varying parameters, namely the number of domains, the number of concurrent authentication requests, and the number of Merkle tree leaves. Experimental results show that our solution supports the management of millions of devices in a domain and can process more than 10,000 concurrent cross-domain authentication requests in only 5531 ms. Meanwhile, the gas costs are shown to be acceptable. © IEEE

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE Computer Society, 2024
Keywords
Authentication, Blockchains, cross-domain authentication, Internet of Things, IoT device management, Merkle tree, Organizations, Peer-to-peer computing, Scalability, smart contract, Smart contracts
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-51427 (URN)
10.1109/TNSE.2023.3292624 (DOI)
2-s2.0-85164391552 (Scopus ID)
Note

Funding: National Key R&D Program of China (Grant Number: 2021YFB2700200); Natural Science Foundation of China (Grant Number: U21A20467, U21B2021, U22B2008, U2241213, 62202027, 61932011, 61972019, 61972018, 61972017, 62172025 and 61932014); Young Elite Scientists Sponsorship Program by CAST (Grant Number: 2022QNRC001); Beijing Natural Science Foundation (Grant Number: M23016, M21031, L222050 and M22038); CCF-Huawei Huyanglin Foundation (Grant Number: CCF-HuaweiBC2021009); Yunnan Key Laboratory of Blockchain Application Technology Open Project (Grant Number: 202105AG070005 and YNB202206)

Available from: 2023-08-17. Created: 2023-08-17. Last updated: 2024-01-16. Bibliographically approved.
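The core storage idea in the abstract above — keeping full device records off-chain and committing only a single Merkle root on-chain — can be sketched as follows. This is a minimal illustration, not the paper's implementation; SHA-256 and the string encoding of device records are assumptions.

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 hash, used for both leaves and internal nodes."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Compute a Merkle root by pairwise hashing; an odd node is carried up."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                nxt.append(h(level[i] + level[i + 1]))
            else:
                nxt.append(level[i])  # odd node promoted unchanged
        level = nxt
    return level[0]

# Only this 32-byte root would need to go into the smart contract;
# the sensitive device records themselves stay off-chain.
devices = [f"device-{i}".encode() for i in range(4)]
root = merkle_root(devices)
assert len(root) == 32
```

Membership of any single device record can then be proved against the stored root with a logarithmic-size path of sibling hashes, which is what makes the on-chain storage cost independent of the number of devices.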
Miao, J., Wang, Z., Wu, Z., Ning, X. & Tiwari, P. (2024). A blockchain-enabled privacy-preserving authentication management protocol for Internet of Medical Things. Expert systems with applications, 237, Part A, Article ID 121329.
2024 (English) In: Expert systems with applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 237, Part A, article id 121329. Article in journal (Refereed). Published.
Abstract [en]

Over the last decade, with the increasing worldwide popularity and usage of the internet of things, the Internet of Medical Things (IoMT) has emerged as a key technology of the modern era. IoMT uses Artificial Intelligence, 5G, big data, edge computing, and blockchain to provide users with electronic medical services. However, it may face several security threats and attacks over an insecure public network. Therefore, to protect sensitive medical data in IoMT, it is necessary to design a secure and efficient authentication protocol. In this study, we propose a privacy-preserving authentication management protocol based on blockchain. The protocol uses a blockchain to store identities and related parameters to assist communication entities in authentication. In addition, the protocol adopts a three-factor authentication method and introduces a Chebyshev chaotic map to ensure the security of user login and authentication. Formal security proof and analysis using the random oracle model and Burrows-Abadi-Needham logic show that the proposed protocol is secure. Moreover, we use informal security analysis to demonstrate that the protocol can resist various security attacks. The functional comparison shows that the protocol has high security. Performance analysis and comparison with other protocols show that the proposed protocol reduces computation overhead, communication overhead, and storage overhead by up to 39.8%, 93.6%, and 86.7%, respectively. © 2023 Elsevier Ltd

Place, publisher, year, edition, pages
Oxford: Elsevier, 2024
Keywords
Blockchain, Chebyshev chaotic map, Internet of medical things, Privacy-preserving, Three-factor
National Category
Communication Systems
Identifiers
urn:nbn:se:hh:diva-51669 (URN)
10.1016/j.eswa.2023.121329 (DOI)
001075477100001 ()
2-s2.0-85170423010 (Scopus ID)
Available from: 2023-10-09. Created: 2023-10-09. Last updated: 2024-01-17. Bibliographically approved.
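The Chebyshev chaotic map mentioned in the abstract above rests on the semigroup property of Chebyshev polynomials, T_r(T_s(x)) = T_s(T_r(x)) = T_{rs}(x), which enables Diffie–Hellman-style key agreement. A minimal real-valued sketch of that commutativity (illustrative only; deployed protocols compute the map over a finite field for security, and the specific values below are arbitrary):

```python
import math

def chebyshev(n: int, x: float) -> float:
    """T_n(x) via the recurrence T_n = 2x*T_{n-1} - T_{n-2}."""
    t0, t1 = 1.0, x
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, 2 * x * t1 - t0
    return t1

# Two parties with secret degrees r and s, sharing a public seed x,
# arrive at the same value by applying their maps in either order.
x = 0.53
r, s = 5, 7
shared_a = chebyshev(r, chebyshev(s, x))
shared_b = chebyshev(s, chebyshev(r, x))
assert math.isclose(shared_a, shared_b, rel_tol=1e-9, abs_tol=1e-9)
assert math.isclose(shared_a, chebyshev(r * s, x), rel_tol=1e-9, abs_tol=1e-9)
```

The commutativity plays the role that modular exponentiation plays in classical Diffie–Hellman: each side keeps its degree secret while both derive the same shared value.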
Tian, S., Li, L., Li, W., Ran, H., Ning, X. & Tiwari, P. (2024). A survey on few-shot class-incremental learning. Neural Networks, 169, 307-324
2024 (English) In: Neural Networks, ISSN 0893-6080, E-ISSN 1879-2782, Vol. 169, p. 307-324. Article, review/survey (Refereed). Published.
Abstract [en]

Large deep learning models are impressive, but they struggle when real-time data is not available. Few-shot class-incremental learning (FSCIL) poses a significant challenge for deep neural networks: learning new tasks from just a few labeled samples without forgetting the previously learned ones. This setup can easily lead to catastrophic forgetting and overfitting, severely affecting model performance. Studying FSCIL helps overcome the limitations of deep learning models regarding data volume and acquisition time, while improving the practicality and adaptability of machine learning models. This paper provides a comprehensive survey of FSCIL. Unlike previous surveys, we aim to synthesize few-shot learning and incremental learning, introducing FSCIL from two perspectives while reviewing over 30 theoretical research studies and more than 20 applied research studies. From the theoretical perspective, we provide a novel categorization that divides the field into five subcategories: traditional machine learning methods, meta-learning-based methods, feature- and feature-space-based methods, replay-based methods, and dynamic network structure-based methods. We also evaluate the performance of recent theoretical research on FSCIL benchmark datasets. From the application perspective, FSCIL has achieved impressive results in various fields of computer vision, such as image classification, object detection, and image segmentation, as well as in natural language processing and graph learning. We summarize the important applications. Finally, we point out potential future research directions, including applications, problem setups, and theory development. Overall, this paper offers a comprehensive analysis of the latest advances in FSCIL from a methodological, performance, and application perspective. © 2023 The Author(s)

Place, publisher, year, edition, pages
Oxford: Elsevier, 2024
Keywords
Catastrophic forgetting, Class-incremental learning, Few-shot learning, Overfitting, Performance evaluation
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:hh:diva-52064 (URN)
10.1016/j.neunet.2023.10.039 (DOI)
2-s2.0-85175341096 (Scopus ID)
Note

Funding: This work is supported by the National Natural Science Foundation of China (Grant No. 62373343) and the Beijing Natural Science Foundation, China (No. L233036).

Available from: 2023-11-17. Created: 2023-11-17. Last updated: 2023-11-17. Bibliographically approved.
Liu, J., Guan, S., Zou, Q., Wu, H., Tiwari, P. & Ding, Y. (2024). AMDGT: Attention aware multi-modal fusion using a dual graph transformer for drug–disease associations prediction. Knowledge-Based Systems, 284, Article ID 111329.
2024 (English) In: Knowledge-Based Systems, ISSN 0950-7051, E-ISSN 1872-7409, Vol. 284, article id 111329. Article in journal (Refereed). Published.
Abstract [en]

Identifying new indications for existing drugs is crucial throughout the various stages of drug discovery. Computational methods are valuable for establishing meaningful associations between drugs and diseases. However, most methods predict drug–disease associations based solely on similarity data, neglecting valuable biological and chemical information. These methods often use basic concatenation to integrate information from different modalities, limiting their ability to capture features from a comprehensive and in-depth perspective. Therefore, a novel multimodal framework called AMDGT was proposed to predict new drug associations based on dual-graph transformer modules. By combining similarity data and complex biochemical information, AMDGT performs multimodal feature fusion of drugs and diseases effectively and comprehensively with an attention-aware modality interaction architecture. Extensive experimental results indicate that AMDGT surpasses state-of-the-art methods on real-world datasets. Moreover, case and molecular docking studies demonstrate that AMDGT is an effective tool for drug repositioning. Our code is available at GitHub: https://github.com/JK-Liu7/AMDGT. © 2023 The Author(s)

Place, publisher, year, edition, pages
Amsterdam: Elsevier, 2024
Keywords
Attention mechanism, Drug repositioning, Drug–disease associations, Graph transformer, Multimodal learning
National Category
Pharmaceutical Sciences
Identifiers
urn:nbn:se:hh:diva-52420 (URN)
10.1016/j.knosys.2023.111329 (DOI)
2-s2.0-85181175754 (Scopus ID)
Note

Funding: The National Natural Science Foundation of China (62073231, 62176175, 62172076), National Research Project (2020YFC2006602), Provincial Key Laboratory for Computer Information Processing Technology, Soochow University, China (KJS2166), Opening Topic Fund of Big Data Intelligent Engineering Laboratory of Jiangsu Province, China (SDGC2157), Postgraduate Research and Practice Innovation Program of Jiangsu Province, China, Zhejiang Provincial Natural Science Foundation of China (Grant No. LY23F020003), and the Municipal Government of Quzhou, China (Grant No. 2023D038).

Available from: 2024-01-18. Created: 2024-01-18. Last updated: 2024-01-18. Bibliographically approved.
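The "attention-aware modality interaction" described in the abstract above can be illustrated, in heavily simplified form, by standard scaled dot-product cross-attention between two modality embeddings. This is a generic sketch under assumed dimensions, not the AMDGT architecture; the embedding matrices and sizes are made up for illustration.

```python
import numpy as np

def softmax(z: np.ndarray, axis: int = -1) -> np.ndarray:
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_mod: np.ndarray, key_mod: np.ndarray, d_k: int) -> np.ndarray:
    """Fuse two modalities: queries from one, keys/values from the other."""
    scores = query_mod @ key_mod.T / np.sqrt(d_k)  # (n_q, n_k) similarity
    weights = softmax(scores, axis=-1)             # each row sums to 1
    return weights @ key_mod                       # (n_q, d_k) fused output

rng = np.random.default_rng(0)
drug_emb = rng.standard_normal((4, 16))     # e.g. similarity-based features
disease_emb = rng.standard_normal((6, 16))  # e.g. biochemical features
fused = cross_attention(drug_emb, disease_emb, d_k=16)
assert fused.shape == (4, 16)
```

The point of attention over plain concatenation is that each drug representation is rewritten as a learned, input-dependent mixture of the other modality's features, rather than a fixed juxtaposition.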
Wu, H., Liu, J., Jiang, T., Zou, Q., Qi, S., Cui, Z., . . . Ding, Y. (2024). AttentionMGT-DTA: A multi-modal drug-target affinity prediction using graph transformer and attention mechanism. Neural Networks, 169, 623-636
2024 (English) In: Neural Networks, ISSN 0893-6080, E-ISSN 1879-2782, Vol. 169, p. 623-636. Article in journal (Refereed). Published.
Abstract [en]

The accurate prediction of drug-target affinity (DTA) is a crucial step in drug discovery and design. Traditional experiments are very expensive and time-consuming. Recently, deep learning methods have achieved notable performance improvements in DTA prediction. However, one challenge for deep learning-based models is appropriate and accurate representations of drugs and targets, especially the lack of effective exploration of target representations. Another challenge is how to comprehensively capture the interaction information between different instances, which is also important for predicting DTA. In this study, we propose AttentionMGT-DTA, a multi-modal attention-based model for DTA prediction. AttentionMGT-DTA represents drugs and targets by a molecular graph and binding pocket graph, respectively. Two attention mechanisms are adopted to integrate and interact information between different protein modalities and drug-target pairs. The experimental results showed that our proposed model outperformed state-of-the-art baselines on two benchmark datasets. In addition, AttentionMGT-DTA also had high interpretability by modeling the interaction strength between drug atoms and protein residues. Our code is available at https://github.com/JK-Liu7/AttentionMGT-DTA. © 2023 The Author(s)

Place, publisher, year, edition, pages
Oxford: Elsevier, 2024
Keywords
Attention mechanism, Drug–target affinity, Graph neural network, Graph transformer, Multi-modal learning
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-52421 (URN)
10.1016/j.neunet.2023.11.018 (DOI)
37976593 (PubMedID)
2-s2.0-85181262524 (Scopus ID)
Note

Funding: The National Natural Science Foundation of China (62073231, 62176175, 62172076), National Research Project (2020YFC2006602), Provincial Key Laboratory for Computer Information Processing Technology, Soochow University (KJS2166), Opening Topic Fund of Big Data Intelligent Engineering Laboratory of Jiangsu Province (SDGC2157), Postgraduate Research and Practice Innovation Program of Jiangsu Province, Zhejiang Provincial Natural Science Foundation of China (Grant No. LY23F020003), and the Municipal Government of Quzhou, China (Grant No. 2023D038).

Available from: 2024-01-18. Created: 2024-01-18. Last updated: 2024-01-18. Bibliographically approved.
Ning, X., Yu, Z., Li, L., Li, W. & Tiwari, P. (2024). DILF: Differentiable rendering-based multi-view Image–Language Fusion for zero-shot 3D shape understanding. Information Fusion, 102, 1-12, Article ID 102033.
2024 (English) In: Information Fusion, ISSN 1566-2535, E-ISSN 1872-6305, Vol. 102, p. 1-12, article id 102033. Article in journal (Refereed). Published.
Abstract [en]

Zero-shot 3D shape understanding aims to recognize “unseen” 3D categories that are not present in training data. Recently, Contrastive Language–Image Pre-training (CLIP) has shown promising open-world performance in zero-shot 3D shape understanding tasks by fusing information between the language and 3D modalities. It first renders 3D objects into multiple 2D image views and then learns to understand the semantic relationships between the textual descriptions and images, enabling the model to generalize to new and unseen categories. However, existing studies in zero-shot 3D shape understanding rely on predefined rendering parameters, resulting in repetitive, redundant, and low-quality views. This limitation hinders the model's ability to fully comprehend 3D shapes and adversely impacts the text–image fusion in a shared latent space. To this end, we propose a novel approach called Differentiable rendering-based multi-view Image–Language Fusion (DILF) for zero-shot 3D shape understanding. Specifically, DILF leverages large-scale language models (LLMs) to generate textual prompts enriched with 3D semantics and designs a differentiable renderer with learnable rendering parameters to produce representative multi-view images. These rendering parameters can be iteratively updated using a text–image fusion loss, which aids parameter regression and allows the model to determine the optimal viewpoint positions for each 3D object. Then a group-view mechanism is introduced to model interdependencies across views, enabling efficient information fusion to achieve a more comprehensive 3D shape understanding. Experimental results demonstrate that DILF outperforms state-of-the-art methods for zero-shot 3D classification while maintaining competitive performance for standard 3D classification. The code is available at https://github.com/yuzaiyang123/DILP. © 2023 The Author(s)

Place, publisher, year, edition, pages
Amsterdam: Elsevier, 2024
Keywords
Differentiable rendering, Information fusion, Text–image fusion, Zero-shot 3D shape understanding
National Category
Language Technology (Computational Linguistics)
Identifiers
urn:nbn:se:hh:diva-51777 (URN)
10.1016/j.inffus.2023.102033 (DOI)
001084085200001 ()
2-s2.0-85172076357 (Scopus ID)
Note

Funding agency: National Natural Science Foundation of China (NSFC), Grant number: 6237334; Beijing Natural Science Foundation, Grant number: L233036

Available from: 2023-11-16. Created: 2023-11-16. Last updated: 2023-11-22. Bibliographically approved.
Han, R., Peng, T., Wang, B., Liu, L., Tiwari, P. & Wan, X. (2024). Document-level Relation Extraction with Relation Correlations. Neural Networks, 171, 14-24
2024 (English) In: Neural Networks, ISSN 0893-6080, E-ISSN 1879-2782, Vol. 171, p. 14-24. Article in journal (Refereed). Published.
Abstract [en]

Document-level relation extraction faces two often-overlooked challenges: the long-tail problem and the multi-label problem. Previous work focuses mainly on obtaining better contextual representations for entity pairs and hardly addresses the above challenges. In this paper, we analyze the co-occurrence correlation of relations and introduce it into the document-level relation extraction task for the first time. We argue that the correlations can not only transfer knowledge between data-rich relations and data-scarce ones to assist in the training of long-tailed relations, but also reflect semantic distance, guiding the classifier to identify semantically close relations for multi-label entity pairs. Specifically, we use relation embeddings as a medium and propose two co-occurrence prediction sub-tasks, from coarse- and fine-grained perspectives, to capture relation correlations. Finally, the learned correlation-aware embeddings are used to guide the extraction of relational facts. Substantial experiments on two popular datasets (i.e., DocRED and DWIE) are conducted, and our method achieves superior results compared to baselines. Insightful analysis also demonstrates the potential of relation correlations to address the above challenges. The data and code are released at https://github.com/RidongHan/DocRE-Co-Occur. © 2023 Elsevier Ltd

Place, publisher, year, edition, pages
Oxford: Elsevier, 2024
Keywords
Co-occurrence, Document-level, Multi-task, Relation Correlations, Relation Extraction
National Category
Language Technology (Computational Linguistics)
Identifiers
urn:nbn:se:hh:diva-52317 (URN)
10.1016/j.neunet.2023.11.062 (DOI)
2-s2.0-85179586274 (Scopus ID)
Note

Funding: This work is supported by the National Natural Science Foundation of China under grant No. 61872163 and 61806084, Jilin Province Key Scientific and Technological Research and Development Project under grant No. 20210201131GX, and Jilin Provincial Education Department Project under grant No. JJKH20190160KJ.

Available from: 2023-12-22. Created: 2023-12-22. Last updated: 2024-01-17. Bibliographically approved.
Ran, H., Li, W., Li, L., Tian, S., Ning, X. & Tiwari, P. (2024). Learning optimal inter-class margin adaptively for few-shot class-incremental learning via neural collapse-based meta-learning. Information Processing & Management, 61(3), Article ID 103664.
2024 (English) In: Information Processing & Management, ISSN 0306-4573, E-ISSN 1873-5371, Vol. 61, no 3, article id 103664. Article in journal (Refereed). In press.
Abstract [en]

Few-Shot Class-Incremental Learning (FSCIL) aims to learn new classes incrementally with a limited number of samples per class. It faces issues of forgetting previously learned classes and overfitting on few-shot classes. An efficient strategy is to learn features that are discriminative in both base and incremental sessions. Current methods improve discriminability by manually designing inter-class margins based on empirical observations, which can be suboptimal. The emerging Neural Collapse (NC) theory provides a theoretically optimal inter-class margin for classification, serving as a basis for adaptively computing the margin. Yet, it is designed for closed, balanced data, not for sequential or few-shot imbalanced data. To address this gap, we propose a Meta-learning- and NC-based FSCIL method, MetaNC-FSCIL, to compute the optimal margin adaptively and maintain it at each incremental session. Specifically, we first compute the theoretically optimal margin based on the NC theory. Then we introduce a novel loss function to ensure that the loss value is minimized precisely when the inter-class margin reaches its theoretical optimum. Motivated by the intuition that “learn how to preserve the margin” matches meta-learning's goal of “learn how to learn”, we embed the loss function in base-session meta-training to preserve the margin for future meta-testing sessions. Experimental results demonstrate the effectiveness of MetaNC-FSCIL, achieving superior performance on multiple datasets. The code is available at https://github.com/qihangran/metaNC-FSCIL. © 2024 The Author(s)

Place, publisher, year, edition, pages
London: Elsevier, 2024
Keywords
Few-shot class-incremental learning, Meta-learning, Neural collapse
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:hh:diva-52738 (URN)
10.1016/j.ipm.2024.103664 (DOI)
2-s2.0-85183769285 (Scopus ID)
Note

Funding: This work is supported by the National Natural Science Foundation of China (No. 62373343); and the Beijing Natural Science Foundation, China (No. L233036).

Available from: 2024-02-23. Created: 2024-02-23. Last updated: 2024-02-23. Bibliographically approved.
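The "theoretically optimal inter-class margin" from Neural Collapse theory, referenced in the abstract above, refers to class prototypes collapsing to a simplex equiangular tight frame (ETF): every pair of the K class means attains the same, maximal separation, with pairwise cosine similarity of exactly −1/(K−1). A quick numerical check of that property (illustrative only, not the MetaNC-FSCIL loss):

```python
import numpy as np

def simplex_etf(k: int) -> np.ndarray:
    """K unit-norm prototypes forming a simplex ETF (rows), embedded in R^K."""
    vertices = np.eye(k) - np.ones((k, k)) / k   # center the standard basis
    return vertices / np.linalg.norm(vertices, axis=1, keepdims=True)

K = 5
protos = simplex_etf(K)
cosines = protos @ protos.T

# Off-diagonal cosine similarity is exactly -1/(K-1): the equal,
# maximal pairwise separation that NC-based margin methods aim to preserve.
off_diag = cosines[~np.eye(K, dtype=bool)]
assert np.allclose(off_diag, -1.0 / (K - 1))
```

A margin derived this way is adaptive in the sense that it depends only on the number of classes K, rather than on hand-tuned empirical constants.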
Samareh-Jahani, M., Saberi-Movahed, F., Eftekhari, M., Aghamollaei, G. & Tiwari, P. (2024). Low-Redundant Unsupervised Feature Selection based on Data Structure Learning and Feature Orthogonalization. Expert systems with applications, 240, Article ID 122556.
2024 (English) In: Expert systems with applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 240, article id 122556. Article in journal (Refereed). In press.
Abstract [en]

An orthogonal representation of features can offer valuable insights into feature selection, as it aims to find a representative subset of features from which all features can be accurately reconstructed by a set of features that are linearly independent, uncorrelated, and perpendicular to each other. In this paper, a novel feature selection method, called Low-Redundant Unsupervised Feature Selection based on Data Structure Learning and Feature Orthogonalization (LRDOR), is presented. In the first stage, LRDOR applies QR factorization to the whole set of features to find an orthogonal representation of the feature space. Then, LRDOR utilizes a directional distance based on matrix factorization to determine the distance between the set of considered features and the orthogonal set obtained from the original features. Moreover, LRDOR simultaneously takes into account the local correlation of features and the data manifold as dual information in the feature selection process, which can lead to a low level of redundancy and maintain the geometric data structure when reducing the data dimension. In addition to an efficient iterative algorithm for solving the objective function of LRDOR, a convergence analysis is also included. The results of the experiments demonstrate that, for clustering purposes, LRDOR works better than other related state-of-the-art unsupervised feature selection methods on ten real-world datasets. © 2023 Elsevier Ltd

Place, publisher, year, edition, pages
Oxford: Elsevier, 2024
Keywords
Data manifold, Feature selection, Local correlation, Matrix factorization, Orthogonalization
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-52171 (URN)
10.1016/j.eswa.2023.122556 (DOI)
001116947100001 ()
2-s2.0-85177176847 (Scopus ID)
Available from: 2023-12-01. Created: 2023-12-01. Last updated: 2023-12-21. Bibliographically approved.
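The first stage of LRDOR described above — obtaining an orthogonal representation of the feature space via QR factorization — can be sketched with NumPy. This illustrates only the QR step on a hypothetical data matrix, not the full LRDOR objective:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.standard_normal((100, 8))   # samples x features (hypothetical data)

# QR factorization of the feature matrix: the columns of Q form an
# orthonormal basis spanning the same space as the original features,
# i.e. a linearly independent, uncorrelated, mutually perpendicular set.
Q, R = np.linalg.qr(X)              # Q: (100, 8); R: (8, 8) upper triangular

assert np.allclose(Q.T @ Q, np.eye(8), atol=1e-10)  # orthonormal columns
assert np.allclose(Q @ R, X)                         # exact reconstruction
assert np.allclose(R, np.triu(R))                    # R is upper triangular
```

Distances between the original feature columns and this orthogonal set can then score each feature's redundancy, which is the role the directional distance plays in the method.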
Yu, Z., Tiwari, P., Hou, L., Li, L., Li, W., Jiang, L. & Ning, X. (2024). MV-ReID: 3D Multi-view Transformation Network for Occluded Person Re-Identification. Knowledge-Based Systems, 283, Article ID 111200.
2024 (English) In: Knowledge-Based Systems, ISSN 0950-7051, E-ISSN 1872-7409, Vol. 283, article id 111200. Article in journal (Refereed). Published.
Abstract [en]

Re-identification (ReID) of occluded persons is a challenging task due to the loss of information in scenes with occlusions. Most existing methods for occluded ReID use 2D-based network structures to directly extract representations from 2D RGB (red, green, and blue) images, which can result in reduced performance in occluded scenes. However, since a person is a 3D non-grid object, learning semantic representations in a 2D space can limit the ability to accurately profile an occluded person. Therefore, it is crucial to explore alternative approaches that can effectively handle occlusions and leverage the full 3D nature of a person. To tackle these challenges, in this study, we employ a 3D view-based approach that fully utilizes the geometric information of 3D objects while leveraging advancements in 2D-based networks for feature extraction. Our study is the first to introduce a 3D view-based method in the areas of holistic and occluded ReID. To implement this approach, we propose a random rendering strategy that converts 2D RGB images into 3D multi-view images. We then use a 3D Multi-View Transformation Network for ReID (MV-ReID) to group and aggregate these images into a unified feature space. Compared to 2D RGB images, multi-view images can reconstruct occluded portions of a person in 3D space, enabling a more comprehensive understanding of occluded individuals. The experiments on benchmark datasets demonstrate that the proposed method achieves state-of-the-art results on occluded ReID tasks and exhibits competitive performance on holistic ReID tasks. These results also suggest that our approach has the potential to solve occlusion problems and contribute to the field of ReID. The source code and dataset are available at https://github.com/yuzaiyang123/MV-Reid. © 2023 Elsevier B.V.

Place, publisher, year, edition, pages
Amsterdam: Elsevier, 2024
Keywords
3D multi-view learning, Occluded person Re-identification
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:hh:diva-52204 (URN)
10.1016/j.knosys.2023.111200 (DOI)
2-s2.0-85177792581 (Scopus ID)
Note

Funding: The National Natural Science Foundation of China (No. 62373343); Beijing Natural Science Foundation (No. L233036).

Available from: 2023-12-08. Created: 2023-12-08. Last updated: 2023-12-08. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-2851-4260
