Particle auto-statistics and measurement of the spherical powder for 3D printing based on deep learning

Wang Yichao Zhang Zheng Huang Haizhou Lin Wenxiong

Citation: Wang Yichao, Zhang Zheng, Huang Haizhou, Lin Wenxiong. Particle auto-statistics and measurement of the spherical powder for 3D printing based on deep learning[J]. Infrared and Laser Engineering, 2021, 50(10): 2021G004. doi: 10.3788/IRLA2021G004

基于深度学习的3D打印球形粉末颗粒自动统计与测量

doi: 10.3788/IRLA2021G004
  • CLC number: TP391.4


Funds: National Key Research and Development Program of China (2018YFB0407403)
    Author Bio:

    Wang Yichao, male, M.S. candidate, mainly engaged in research on selective laser melting additive manufacturing of titanium alloys

    Lin Wenxiong, male, professor, doctoral supervisor, Ph.D., mainly engaged in fundamental and engineering research on metal additive manufacturing, all-solid-state laser technology, and nonlinear optics

  • Abstract: With the continuous development of metal powder 3D printing technology, it is increasingly important to accurately extract the particle shape, particle size, and spheroidization ratio of powder particles from microscopy images. Based on the deep learning algorithm Mask R-CNN, this paper proposes a method for the automatic statistics and measurement of spherical powder particles in electron microscopy images. The method can automatically identify more than 1 000 particles in a single microscopy image, effectively detect occluded particles, and generate particle size distribution, degree of sphericity, and spheroidization ratio statistics. Compared with a traditional image segmentation algorithm, the particle recognition accuracy is improved by 23.6%. Compared with the particle size distribution measured by laser diffraction, the proposed method can also effectively identify the small particles adhering to larger spherical powder particles.

  • [1] Cooke S, Ahmadi K, Willerth S, et al. Metal additive manufacturing: Technology, metallurgy and modelling [J]. Journal of Manufacturing Processes, 2020, 57: 978-1003. doi:  10.1016/j.jmapro.2020.07.025
    [2] Qian M, Froes F H. Titanium Powder Metallurgy: Science, Technology and Applications[M]. Oxford: Butterworth-Heinemann, 2015.
    [3] Strondl A, Lyckfeldt O, Brodin H K, et al. Characterization and control of powder properties for additive manufacturing [J]. JOM, 2015, 67(3): 549-554. doi:  10.1007/s11837-015-1304-0
    [4] Sun P, Fang Z Z, Zhang Y, et al. Review of the methods for production of spherical Ti and Ti alloy powder [J]. JOM, 2017, 69(10): 1853-1860. doi:  10.1007/s11837-017-2513-5
    [5] Wei W-H, Wang L-Z, Chen T, et al. Study on the flow properties of Ti-6Al-4V powders prepared by radio-frequency plasma spheroidization [J]. Advanced Powder Technology, 2017, 28(9): 2431-2437. doi:  10.1016/j.apt.2017.06.025
    [6] Slotwinski J A, Garboczi E J, Stutzman P E, et al. Characterization of metal powders used for additive manufacturing [J]. Journal of Research of the National Institute of Standards and Technology, 2014, 119: 460. doi:  10.6028/jres.119.018
    [7] Spierings A B, Voegtlin M, Bauer T U, et al. Powder flowability characterisation methodology for powder-bed-based metal additive manufacturing [J]. Progress in Additive Manufacturing, 2016, 1(1): 9-20. doi:  10.1007/s40964-015-0001-4
    [8] ISO 13322-1. Particle size analysis-Image analysis methods-Part 1: Static image analysis methods[S]. Switzerland: [s.n.], 2014.
    [9] Thermo Fisher Scientific. Thermo Scientific ParticleMetric [OL]. [2021-03-21].https://www.thermofisher.cn/order/catalog/product/PARTICLEMETRIC?SID=srch-srp-PARTICLEMETRIC#/PARTICLEMETRIC?SID=srch-srp-PARTICLEMETRIC.
    [10] ISO 14488. Particulate materials-Sampling and sample splitting for the determination of particulate properties[S]. Switzerland: [s.n.], 2007.
    [11] Chong Z, Chaoyang M, Zicheng W, et al. Spheroidization of TC4 (Ti6Al4V) alloy powders by radio frequency plasma processing [J]. Rare Metal Materials and Engineering, 2019, 48(2): 446-451.
    [12] Oktay A B, Gurses A. Automatic detection, localization and segmentation of nano-particles with deep learning in microscopy images [J]. Micron, 2019, 120: 113-119. doi:  10.1016/j.micron.2019.02.009
    [13] Rueden C T, Schindelin J, Hiner M C, et al. ImageJ2: ImageJ for the next generation of scientific image data [J]. BMC Bioinformatics, 2017, 18(1): 1-26. doi:  10.1186/s12859-017-1934-z
    [14] Grant T, Rohou A, Grigorieff N. cisTEM, user-friendly software for single-particle image processing [J]. eLife, 2018, 7: e35383. doi:  10.7554/eLife.35383
    [15] He K, Gkioxari G, Dollár P, et al. Mask R-CNN[C]//Proceedings of the IEEE International Conference on Computer Vision, 2017.
    [16] Yu Y, Zhang K, Yang L, et al. Fruit detection for strawberry harvesting robot in non-structural environment based on Mask-RCNN [J]. Computers and Electronics in Agriculture, 2019, 163: 104846. doi:  10.1016/j.compag.2019.06.001
    [17] Frei M, Kruis F E. Image-based size analysis of agglomerated and partially sintered particles via convolutional neural networks [J]. Powder Technology, 2020, 360: 324-336. doi:  10.1016/j.powtec.2019.10.020
    [18] Wu Y, Lin M, Rohani S. Particle characterization with on-line imaging and neural network image analysis [J]. Chemical Engineering Research and Design, 2020, 157: 114-125. doi:  10.1016/j.cherd.2020.03.004
    [19] Huang H, Luo J, Tutumluer E, et al. Automated segmentation and morphological analyses of stockpile aggregate images using deep convolutional neural networks [J]. Transportation Research Record, 2020, 2674(10): 285-298. doi:  10.1177/0361198120943887
    [20] Ruiz-Santaquiteria J, Bueno G, Deniz O, et al. Semantic versus instance segmentation in microscopic algae detection [J]. Engineering Applications of Artificial Intelligence, 2020, 87: 103271. doi:  10.1016/j.engappai.2019.103271
    [21] Russell B C, Torralba A, Murphy K P, et al. LabelMe: A database and web-based tool for image annotation [J]. International Journal of Computer Vision, 2008, 77(1-3): 157-173. doi:  10.1007/s11263-007-0090-8
    [22] Ren S, He K, Girshick R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks [J]. Advances in Neural Information Processing Systems, 2015, 28: 91-99.
    [23] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
    [24] Canziani A, Paszke A, Culurciello E. An analysis of deep neural network models for practical applications [J]. arXiv preprint, 2016: arXiv:1605.07678.
    [25] Lin T Y, Maire M, Belongie S, et al. Microsoft COCO: Common objects in context[C]//European Conference on Computer Vision, 2014.
    [26] Vangla P, Roy N, Gali M L. Image based shape characterization of granular materials and its effect on kinematics of particle motion [J]. Granular Matter, 2018, 20(1): 1-19. doi:  10.1007/s10035-017-0776-8
    [27] De Boor C, De Boor C. A Practical Guide to Splines[M]. New York: Springer-Verlag, 1978.
    [28] Hentschel M L, Page N W. Selection of descriptors for particle shape characterization [J]. Particle & Particle Systems Characterization, 2003, 20(1): 25-38. doi:  10.1002/ppsc.200390002
    [29] Özbilen S. Satellite formation mechanism in gas atomised powders [J]. Powder Metallurgy, 1999, 42(1): 70-78. doi:  10.1179/pom.1999.42.1.70
Publication history
  • Received:  2021-06-10
  • Revised:  2021-07-12
  • Published:  2021-10-20



    • Powder properties are of great importance in powder bed fusion (PBF), currently one of the most popular metal additive manufacturing (also called 3D printing) methods[1-2]. Typically, spherical particles with a proper size distribution promote high flowability and packing density of the powder. A dense powder layer with uniform thickness can significantly improve the dimensional accuracy of the melting process in PBF[3].

      Common preparation methods for spherical powder include the plasma rotating electrode process (PREP), gas atomization (GA), and plasma spheroidization (PS)[4-5]. The properties of powder prepared by these methods can be characterized by the particle size distribution (PSD), degree of sphericity (DS, also called degree of circularity in two dimensions), and spheroidization ratio (SR)[6-7].

      One of the most widely used methods to measure the PSD of metal powder is laser diffraction (LD), which detects and analyzes the angular distribution of the light scattered by a laser beam passing through a diluted powder layer[6]. However, the LD method assumes all particles to be perfectly spherical, which is usually not the case, so it cannot provide shape information. Another method is image analysis[8], in which the morphology of the particles is observed by scanning electron microscopy (SEM) before the degree of sphericity and PSD are calculated with auxiliary software[9]. However, this method requires all particles to be spread sparsely, which limits the number of particles in the field of view. Meanwhile, overlapped particles cannot be analyzed, resulting in statistical deviations[10]. Moreover, manually counting the non-spheres among all particles is very time-consuming, which makes the measurement of the spheroidization ratio challenging[11-12].

      With the advances in SEM technology, abundant particle information can be extracted from microscopy images with existing image processing tools such as ImageJ[13] and cisTEM[14]. However, these tools, mostly based on conventional edge-based and thresholding algorithms, struggle to discriminate overlapped particles and require considerable manual work, which leads to inconsistent processing results.

      In this work, Mask R-CNN[15], one of the most notable instance segmentation convolutional neural networks, was employed to implement the automatic statistics and measurement of microscopy images of spherical powder. In the past few years, Mask R-CNN has been used in many complex vision tasks[16-20]. Compared with traditional algorithms, the proposed model is powerful in detecting the morphology of different particles, even when they are overlapped by upper particles.

    • Figure 1 depicts the flowchart of the developed system, which is built on the instance segmentation results of the Mask R-CNN algorithm and consists of particle size distribution, degree of sphericity, and spheroidization ratio modules.

      Figure 1.  Flowchart of the powder microscopy image automatic analysis system

    • To train the Mask R-CNN model, powder microscopy images are collected with the requirement that powders of varied size and shape be included. The un-sifted Ti-6Al-3Nb-2Zr-1Mo alloy powder (provided by the High Performance Powder Synthesis Lab, Fujian Innovation Academy, Chinese Academy of Sciences), prepared by radio-frequency plasma spheroidization, was selected to construct the dataset. Before SEM, the powder sample is dispersed on conductive tape and then blown with a rubber suction bulb to remove the unstuck powder. Each SEM image (taken on a Phenom XL) is magnified 300 times to include sufficient particles in a clear image with a field of view of 895 µm. The images were saved at a size of 2 048×2 048 pixels.

      To increase detection accuracy and facilitate the use of Mask R-CNN, the original SEM image is then cropped into 16 sub-images of equal size, 512×512 pixels (Fig.2(a)). LabelMe is used to manually label the sub-images[21]; the region of each particle is marked by a polygon, with all vertex coordinates recorded in pixels.
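      As a concrete illustration, the cropping step can be written in a few lines. The sketch below is not the authors' code; the file names are hypothetical, and only Pillow is assumed.

```python
# Minimal sketch of the dataset cropping step: split a 2048x2048 SEM image
# into 16 non-overlapping 512x512 sub-images for LabelMe annotation.
# "sem_image.png" and the output naming are hypothetical.
from PIL import Image

TILE = 512  # sub-image edge length in pixels

def crop_to_tiles(path="sem_image.png"):
    img = Image.open(path)
    assert img.size == (2048, 2048), "expects the 2048x2048 SEM images described above"
    tiles = []
    for top in range(0, img.height, TILE):
        for left in range(0, img.width, TILE):
            # PIL crop box is (left, upper, right, lower)
            tiles.append(img.crop((left, top, left + TILE, top + TILE)))
    return tiles  # 16 tiles in row-major order

if __name__ == "__main__":
    for i, tile in enumerate(crop_to_tiles()):
        tile.save(f"sub_{i:02d}.png")
```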

      Figure 2.  (a) Original SEM image (2 048×2 048 pixels), cropped into 16 parts; (b) Characteristic image labeled with LabelMe (512×512 pixels); (c) The corresponding image mask of (b)

      For ease of post-processing and statistical counting, 4 kinds of labels are adopted (Fig.1(f)): “ins_sphere” denotes a spherical particle that is more than half occluded, “inc_sphere” one that is less than half occluded, “com_sphere” a complete spherical particle, and “non_sphere” a non-spherical particle. A characteristic image labeled with LabelMe is illustrated in Fig.2(b). 100 sub-images (Fig.1(d)), with a total of 14 835 labeled particles (“com_sphere”: 7 073, “inc_sphere”: 4 396, “ins_sphere”: 1 973, “non_sphere”: 1 393), were used for training. Figure 2(c) shows one mask image, generated from the corresponding labeled image, used as input to the training process.

    • Mask R-CNN, extended from Faster R-CNN[22], is a state-of-the-art two-stage instance segmentation architecture (Fig.1(e)). First, proposals for regions that might contain an object are generated from the input image. Second, Mask R-CNN predicts the class of each object, refines its bounding box, and generates a pixel-level mask of the object based on the proposals.

      In the first stage, the input images are processed by a feature extraction network, also called the backbone, to construct feature maps containing spatial semantic information at different scales. ResNet-101[23] was used in this stage, as it offers high accuracy at comparably low computational cost without suffering from vanishing gradients[24]. Then a set of regions of interest (RoIs) that may contain objects is proposed by the region proposal network (RPN) based on the feature maps.

      In the second stage, the feature maps for each RoI proposed by the RPN are cropped by RoI alignment and resized to a common size for the subsequent convolutional networks. RoI alignment also corrects the feature misalignment caused by the low resolution of the feature maps. Next, the cropped feature maps containing objects are fed into a classifier, which performs classification and bounding-box regression so that each bounding box encloses one object. Finally, the original feature maps are cropped again using these bounding boxes and resized; the newly cropped feature maps are fed into a fully convolutional network to perform semantic segmentation and predict the binary masks.

      The prepared training dataset is then used to train the model parameters. Instead of training from scratch, a set of model weights pre-trained on the MS COCO dataset[25] is adopted and fine-tuned. During training, the kernel weights and bias values are automatically adjusted to minimize the training loss, i.e., the difference between the labeled input masks and the network output masks. The batch size (the number of samples propagated through the network at a time) is set to 1, and the number of epochs (iterations over all training data) to 100. The loss-epoch curve is shown in Fig.3. The model was trained on an NVIDIA Quadro RTX 4000 GPU, which took 123 hours in total.
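      For readers who want to reproduce a comparable setup, the sketch below shows a fine-tuning loop with the reported hyperparameters (batch size 1, 100 epochs, pre-trained starting weights). It is a sketch under stated assumptions, not the authors' implementation: torchvision ships COCO-pretrained Mask R-CNN weights only for the ResNet-50 FPN variant, so that variant stands in for the paper's ResNet-101 backbone, and PowderDataset is a hypothetical Dataset yielding torchvision-style (image, target) pairs.

```python
# Hedged fine-tuning sketch (not the authors' code). Assumptions: torchvision's
# ResNet-50 FPN Mask R-CNN stands in for the paper's ResNet-101 backbone, and
# PowderDataset is a hypothetical Dataset of (image_tensor, target_dict) pairs
# with "boxes", "labels", and "masks" keys.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 5  # background + com_sphere, inc_sphere, ins_sphere, non_sphere

model = maskrcnn_resnet50_fpn(pretrained=True)  # start from COCO weights
# swap the class-dependent heads for the 5 powder classes
in_box = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_box, NUM_CLASSES)
in_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_mask, 256, NUM_CLASSES)

loader = torch.utils.data.DataLoader(
    PowderDataset("train/"), batch_size=1, shuffle=True,  # batch size 1, as above
    collate_fn=lambda batch: tuple(zip(*batch)))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
for epoch in range(100):  # 100 epochs, as above
    for images, targets in loader:
        loss_dict = model(list(images), list(targets))  # dict of loss terms
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```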

      Figure 3.  Loss-epoch curve during the training process

    • The raw input image (2 048×2 048 pixels) is cropped into 28 sub-images (Fig.1(b)): 16 sub-images of equal size (512×512 pixels) and 12 border strips spanning the seams between the 16 sub-images (Fig.2(a)); the green strips are 256×1 024 pixels and the blue ones 1 024×256 pixels. The width of the border strips can be adjusted according to the maximum particle size in an image.
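      The 28 crop boxes can be generated as below. This is a sketch assuming one particular placement of the strips (each internal seam covered by two 1 024-pixel-long strips centred on the seam); the paper does not spell out the exact coordinates.

```python
# Sketch of the 28-crop scheme: 16 equal 512x512 tiles plus 12 border strips
# (256x1024 across vertical seams, 1024x256 across horizontal seams). Only the
# counts and sizes come from the text; the exact placement is an assumption.
def crop_boxes(size=2048, tile=512, half=128):
    boxes = []  # (left, upper, right, lower)
    for top in range(0, size, tile):            # 16 equal tiles
        for left in range(0, size, tile):
            boxes.append((left, top, left + tile, top + tile))
    for x in (tile, 2 * tile, 3 * tile):        # 3 internal vertical seams
        for top in (0, size // 2):              # two 256x1024 strips per seam
            boxes.append((x - half, top, x + half, top + 1024))
    for y in (tile, 2 * tile, 3 * tile):        # 3 internal horizontal seams
        for left in (0, size // 2):             # two 1024x256 strips per seam
            boxes.append((left, y - half, left + 1024, y + half))
    return boxes

assert len(crop_boxes()) == 28
```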

      Before the 28 sub-images enter the trained model, each sub-image is transformed into 4 images: the image itself, a 180° rotation, a horizontal flip, and a vertical flip. The 4 output predictions are then roughly merged back into one sub-image. The purpose of this step is to improve the particle recognition rate and reduce wrong classifications. Figure 4 illustrates the transformation and merging process. In the rough merging process, the 4 masks of one particle (possibly fewer than 4) are merged simply according to the intersection-over-union (IoU) and the intersection-over-self (IoS) of their circumscribed rectangles (Fig.5(a)). The IoS here serves to prevent mis-merging, as shown in Fig.5(c). Every two masks satisfying IoURec > 0.7 ∩ IoSRec-A > 0.7 ∩ IoSRec-B > 0.7 are merged. Several particles unrecognized in Fig.4(c) are recognized after rough merging. The merged image still contains some un-merged small masks that belong to the same particle; these are merged correctly in the subsequent precise merging process.
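      A minimal version of the rectangle-based merge test reads as follows; the box format and helper names are ours, while the 0.7 thresholds and their conjunction follow the text.

```python
# Rough-merge test on circumscribed rectangles (x0, y0, x1, y1). The three
# 0.7 thresholds and the and-combination follow the text; helpers are ours.
def _area(b):
    return max(0, b[2] - b[0]) * max(0, b[3] - b[1])

def _intersection(a, b):
    return _area((max(a[0], b[0]), max(a[1], b[1]),
                  min(a[2], b[2]), min(a[3], b[3])))

def should_rough_merge(a, b, thr=0.7):
    inter = _intersection(a, b)
    if inter == 0:
        return False
    iou = inter / (_area(a) + _area(b) - inter)  # intersection over union
    ios_a = inter / _area(a)                     # intersection over self (A)
    ios_b = inter / _area(b)                     # intersection over self (B)
    return iou > thr and ios_a > thr and ios_b > thr
```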

      Figure 4.  Flowchart of transferring and rough merging process of one sub-image

      Figure 5.  Illustration of two kinds of IoU & IoS in rough merging and precise merging processes, respectively. (a) IoU & IoS of two circumscribed rectangles; (b) IoU & IoS of two masks; (c) One example of the usage of IoS

      The next step is to merge all 28 predicted sub-images (Fig.1(f)) by precise merging. Here, the IoU and IoS of the masks themselves, instead of their circumscribed rectangles, are adopted (Fig.5(b)); this consumes much more computation than the rectangle-based IoU & IoS, especially when thousands of particles appear in one input image. In this process, every two masks satisfying $IoU_{Mask} > 0.4 \cup IoS_{Mask\text{-}A} > 0.6 \cup IoS_{Mask\text{-}B} > 0.6$ are merged. The 16 sub-images are then stitched together by the 12 border strips in a tape-like style to reunite the half particles separated at the border of two adjacent sub-images.
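      The mask-level test is the same idea computed on boolean pixel masks, which is what makes it expensive. A sketch, with the thresholds and the or-combination taken from the text:

```python
# Precise-merge test on boolean NumPy masks of equal shape. Thresholds and
# the or-combination follow the text; the function name is ours.
import numpy as np

def should_precise_merge(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    if inter == 0:
        return False
    union = np.logical_or(mask_a, mask_b).sum()
    iou = inter / union           # IoU of the two masks
    ios_a = inter / mask_a.sum()  # fraction of mask A covered by the overlap
    ios_b = inter / mask_b.sum()  # fraction of mask B covered by the overlap
    return iou > 0.4 or ios_a > 0.6 or ios_b > 0.6
```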

    • The error of the particle measurement comes from two sources. One is the deviation of the boundary calculation due to the aliasing effect of the square-pixel tessellation[26]. The other is the residual between the prediction and the ground truth, comprising errors of the labeled regions and of the prediction model. A B-spline curve is introduced to smooth the boundary of the particle mask and reduce the former error; it can be described by the following parameterized function[27]:

      $$Sp(t) = \sum\limits_{i = 0}^{n + k} p_i N_i^k(t)$$ (1)

      where $k$ is the degree of the spline curve, $p_i$ is the coordinate of the $i$th control point, and $N_i^k(t)$ is the $i$th B-spline basis function of degree $k$, which can be computed as follows:

      $$\begin{array}{l} N_i^0(t) = \left\{ \begin{array}{ll} 1, & t_i \leqslant t < t_{i+1} \\ 0, & \text{otherwise} \end{array} \right. \\ N_i^k(t) = \dfrac{t - t_i}{t_{i+k} - t_i}N_i^{k-1}(t) + \dfrac{t_{i+k+1} - t}{t_{i+k+1} - t_{i+1}}N_{i+1}^{k-1}(t) \end{array}$$ (2)

      where ${t_i}$ is the $i$th element of a uniformly distributed knot sequence ranging from 0 to 1. During the smoothing process, the contour points of the mask are extracted and every 5th point is selected as a control point. As shown in Fig.6(a), the green points are the contour points, the red points are the control points, and the red curve is the B-spline curve.
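      In practice the smoothing can be done with a standard parametric spline routine. A sketch using OpenCV and SciPy (both assumed, since the paper names neither), with every 5th contour point kept as a control point as described:

```python
# Boundary-smoothing sketch: extract the particle contour, subsample every
# 5th point as control points, and fit a closed cubic B-spline (Eqs.(1)-(2)
# are what splprep/splev evaluate internally). Library choice is ours.
import cv2
import numpy as np
from scipy.interpolate import splprep, splev

def smooth_boundary(mask, step=5, n_samples=400):
    # mask: uint8 binary image containing one particle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    pts = max(contours, key=cv2.contourArea).squeeze()  # (N, 2) contour points
    ctrl = pts[::step]                                  # every 5th point
    tck, _ = splprep([ctrl[:, 0], ctrl[:, 1]], k=3, per=True, s=0)
    u = np.linspace(0, 1, n_samples)
    x, y = splev(u, tck)
    return np.stack([x, y], axis=1)  # smoothed, closed boundary curve
```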

      Figure 6.  (a) Illustration of particle boundary smoothing and error compensation; (b) Fitted perimeter and area residual function based on scattered deviation values of standard circles

      To compensate for the second, residual error, a set of standard circles with evenly spaced diameters from 5 µm to 100 µm is predicted before the input image is processed. The deviation between the output results and the ground truth of these standard circles is calculated, and a residual function is fitted to the scattered deviation values. Figure 6(b) shows the fitted residual-function curves for perimeter and area. During the statistical process, the residual value (perimeter or area) of each mask is compensated according to the residual function at the mask's equivalent projected-area diameter.
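      A sketch of that calibration step is given below. The deviation arrays are placeholders to be filled from an actual calibration run, and the cubic fit is our assumption; the paper only states that a residual function is fitted to the scattered values.

```python
# Residual-compensation sketch. diameters_um spans the 5-100 um standard
# circles; perimeter_dev/area_dev are placeholder (measured - truth) values
# from a calibration run. A cubic polynomial is an assumed model choice.
import numpy as np

diameters_um = np.arange(5, 101, 5, dtype=float)
perimeter_dev = np.zeros_like(diameters_um)  # fill from calibration
area_dev = np.zeros_like(diameters_um)       # fill from calibration

p_residual = np.polynomial.Polynomial.fit(diameters_um, perimeter_dev, deg=3)
a_residual = np.polynomial.Polynomial.fit(diameters_um, area_dev, deg=3)

def compensate(perimeter, area, d_equiv_um):
    # subtract the fitted residual at this mask's equivalent diameter
    return perimeter - p_residual(d_equiv_um), area - a_residual(d_equiv_um)
```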

    • Except for the particles at the edge of the image, each particle in the original image (2 048×2 048 pixels) is classified and its mask information extracted. There are two methods to calculate the size of a particular particle. One is the equivalent projected-area diameter, i.e., the diameter of a circle with the same projected area as the particle. However, as nearly 30% of the particles in an image are occluded, the equivalent projected-area diameter is inaccurate for them. The other is the minimum circumscribed circle diameter; this descriptor can provide the diameter of the “inc_sphere” particles, which account for 90% of the occluded particles. The “ins_sphere” particles, accounting for 3% of all particles, are excluded from the PSD statistics owing to the limitations of two-dimensional images.
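      Both size descriptors are short computations with OpenCV; a sketch (library choice and scale handling are ours):

```python
# Size-descriptor sketch: equivalent projected-area diameter and minimum
# circumscribed (enclosing) circle diameter for one particle mask.
import cv2
import numpy as np

def particle_diameters(mask, um_per_px):
    # mask: uint8 binary image of one particle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    cnt = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(cnt)
    d_equiv = 2.0 * np.sqrt(area / np.pi) * um_per_px  # projected-area diameter
    (_, _), radius = cv2.minEnclosingCircle(cnt)
    d_circum = 2.0 * radius * um_per_px                # min circumscribed circle
    return d_equiv, d_circum
```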

      The particle's degree of sphericity is calculated using the following formula (also called the root of form factor)[28]:

      $$DS = \frac{{2\sqrt {\pi A} }}{P}$$ (3)

      where $A$ is the projected area of the particle, and $P$ is the perimeter of the particle periphery. This index, sensitive to boundary irregularity, represents how far the shape of the particle deviates from a standard circle.

      The spheroidization ratio is defined as follows:

      $$SR = \left(1 - \dfrac{N_{non}}{N_{all}}\right) \times 100\%$$ (4)

      where ${N_{non}}$ is the number of “non_sphere” particles, and ${N_{all}}$ is the total number of particles.
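      Both statistics follow directly from the masks and class labels; a sketch of Eqs.(3) and (4) (OpenCV-based, our library choice):

```python
# Eq.(3): degree of sphericity from one mask; Eq.(4): spheroidization ratio
# from the predicted class labels of all particles in an image set.
import cv2
import numpy as np

def degree_of_sphericity(mask):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    cnt = max(contours, key=cv2.contourArea)
    A = cv2.contourArea(cnt)        # projected area
    P = cv2.arcLength(cnt, True)    # closed perimeter
    return 2.0 * np.sqrt(np.pi * A) / P  # 1.0 for a perfect circle

def spheroidization_ratio(class_labels):
    n_non = sum(1 for c in class_labels if c == "non_sphere")
    return (1.0 - n_non / len(class_labels)) * 100.0  # percent
```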

    • The output image of our model is shown in Fig.7(d). Each recognized particle is labeled and colored, where the label gives the class and the probability that the particle belongs to it, and the color indicates the class intuitively: green for “com_sphere”, blue for “inc_sphere”, orange for “ins_sphere”, and red for “non_sphere”. We used the Phenom ProSuite software ParticleMetric, a professional microscopy image processing package, to compare the proposed method with a traditional image segmentation method (Fig.7(c) and 7(e)). The recognition accuracy of the proposed method is 96.95%, higher than the 78.44% of ParticleMetric.

      Figure 7.  Predicted results and comparison with the Phenom ProSuite software ParticleMetric. (a) Raw image; (b) Output segmentation result of ParticleMetric; (c) Four enlarged detail regions of (b); (d) Output result of the proposed method; (e) Four enlarged detail regions of (d)

      As the non-sphere particles have complicated, random surface textures and shapes, the traditional method recognizes them poorly. In comparison, the proposed model can recognize these non-sphere particles correctly by deeply learning their complex features, and small spherical particles adhering to a non-sphere particle can also be detected. Moreover, the traditional method can hardly separate two particles that are close to or overlapping each other (Fig.7(c)), whereas our system segments them easily and tells which one is un-occluded.

    • The PSD and DSD statistics of 8 raw images are shown in Fig.8. In total, 9 374 particles were detected by ParticleMetric, fewer than the 12 192 detected by our method. In the traditional method, most of the small particles attached to big ones are detected as part of the latter (Fig.7(e)), a problem solved by our method. Non-sphere particles were recognized by the traditional method as many tiny particles smaller than 5 µm (Fig.7(c)), which do not appear in the corresponding PSD results (Fig.8(a)). This inconsistency arises because particles below a certain size are treated as noise, according to the image-resolution setting of the software.

      An obvious difference in PSD is observed between laser diffraction (using an LS 13 320 Tornado) and microscopy-image particle segmentation (Fig.8(a)). This is attributed to satellite particles, which form from small particles[29] and can adhere to the larger ones. The laser diffraction method treats every separated particle as a perfect sphere, even when it carries many tiny appendages. In this respect, the satellite ratio of the powder can be evaluated from the deviation between the PSDs measured by the proposed segmentation method and by laser diffraction, since there is currently no good method to characterize the satellite ratio. Because incomplete sphere particles are excluded from the statistical process of the degree of sphericity, the statistical results of the degree of sphericity differ significantly between the traditional method and our model (Fig.8(b)).

      Figure 8.  Statistical analysis results and comparison. (a) PSD results measured by ParticleMetric, our method, and the laser diffraction technique, respectively; (b) Degree of sphericity distribution (DSD) results measured by ParticleMetric and the proposed method

      Unlike the other two methods, the proposed model can provide the spheroidization ratio: 646 non-sphere particles were counted among the total 12 192 particles in the 8 raw images, corresponding to an SR of 94.70%.

    • In this study, a spherical-particle image segmentation and auto-statistics system is proposed, employing deep learning and mask-merging techniques. The proposed method can recognize particles of four typical shape classes and extract their feature and size information before providing the particle size distribution, degree of sphericity, and spheroidization ratio of the powder. Superior to the existing image analysis and laser diffraction methods, the proposed method can also detect overlapped spherical particles with high accuracy, automatically calculate the spheroidization ratio of the powder, and point toward a way to measure the satellite ratio of spherical powder. Besides providing accurate particle size and shape information during the production of spherical powder, the proposed method can be extended to a large variety of particles.
