Hypothesis of Neuron Activation According to the Laws of Symmetry

Authors

  • K. N. Maiorov
  • A. G. Lozhkin

DOI:

https://doi.org/10.22213/2410-9304-2019-2-43-49

Keywords:

neural networks, activation function, automorphism, groups of neurons, formal languages

Abstract

The paper reviews the main activation functions used in modern neural networks and their disadvantages. It is concluded that they all share one drawback: the received signals cannot be interpreted, since they are merely normalized values of the weighted sum of synapses. A table of symmetries (automorphisms) and their role in semiotic analysis and linguistics is considered. Linguistics contains universals which, even on superficial analysis, are symmetries; therefore semiotic analysis is a mathematical method, just as linguistics is an exact science subject to the laws of set theory and universal algebra. An assumption is made about the possibility of using pragmatic analysis and the mechanism of symmetries in neural networks. A new approach is proposed: neurons in the hidden layer are grouped by the form of symmetry (automorphism), and each group uses a three-phase activation function that characterizes how the automorphism properties of that group manifest themselves. Each group of neurons has its own memory for storing frequent signals, which subsequently generate symbol chains. At the initial stage two groups of symmetries are taken, reversible and mirror. The proposed approach can make neural networks easier to understand owing to the interpretability of the signals.
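As a purely illustrative sketch (an assumption-based reading of the abstract, not the authors' implementation), the Python fragment below groups hidden neurons into a "reversible" and a "mirror" group, discretizes each group's weighted sums with a three-level ("three-phase") activation, and keeps a per-group frequency memory of the resulting patterns, which could later serve as symbols for chain building. All names, thresholds, and sizes are hypothetical and introduced only for this example.

# Hypothetical sketch (not the authors' implementation): hidden neurons split
# into symmetry groups, each with a three-phase activation and a small memory
# of frequently observed activation patterns.
import numpy as np
from collections import Counter

def three_phase(x, low=-1.0, high=1.0):
    # Three-phase activation: map the weighted sum to one of three levels,
    # read here as "symmetry violated" (-1), "undecided" (0),
    # "symmetry satisfied" (+1). The thresholds are illustrative assumptions.
    return np.where(x < low, -1.0, np.where(x > high, 1.0, 0.0))

class SymmetryGroup:
    """One group of hidden neurons associated with a single automorphism
    (e.g. reversible or mirror symmetry)."""
    def __init__(self, n_inputs, n_neurons, name, rng):
        self.name = name
        self.W = rng.normal(scale=0.5, size=(n_neurons, n_inputs))
        self.b = np.zeros(n_neurons)
        self.memory = Counter()          # frequencies of discrete patterns

    def forward(self, x):
        phases = three_phase(self.W @ x + self.b)
        self.memory[tuple(phases)] += 1  # remember the discrete pattern
        return phases

    def frequent_symbols(self, k=3):
        # The most frequent patterns act as symbols for later chain building.
        return self.memory.most_common(k)

rng = np.random.default_rng(0)
groups = [SymmetryGroup(4, 3, "reversible", rng),
          SymmetryGroup(4, 3, "mirror", rng)]

for _ in range(100):                     # random inputs just to exercise the sketch
    x = rng.normal(size=4)
    symbol_chain = [g.forward(x) for g in groups]   # one pattern per group

for g in groups:
    print(g.name, g.frequent_symbols())

In this toy setting the per-group memory is what gives the signals an interpretation: instead of a raw normalized sum, each group emits a discrete pattern whose frequency can be inspected.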

References

Sozykin A. V. An overview of methods for training deep neural networks // Bulletin of the South Ural State University. Series: Computational Mathematics and Software Engineering. 2017. Vol. 6, No. 3, pp. 28–59. DOI: 10.14529/cmse170303.

Goodfellow I., Bengio Y., Courville A. Deep Learning. The MIT Press, 2016, pp. 84–91.

He K., Zhang X., Ren S., Sun J. Deep residual learning for image recognition // Proceedings of CVPR, 2016, pp. 770–778. URL: https://arxiv.org/abs/1512.03385 (accessed 22.03.2019).

Krizhevsky A., Sutskever I., Hinton G. ImageNet classification with deep convolutional neural networks // Advances in Neural Information Processing Systems (NIPS), 2012, pp. 1097–1105.

Ramachandran P., Zoph B., Le Q. V. Searching for activation functions. CoRR, 2017. URL: https://arxiv.org/abs/1710.05941 (accessed 23.03.2019).

Rudoy G. I. Choosing an activation function for forecasting with neural networks // Machine Learning and Data Analysis. 2011. Vol. 1, No. 1, pp. 16–39.

He K., Zhang X., Ren S., Sun J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification // IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1026–1034. URL: https://arxiv.org/abs/1502.01852 (accessed 22.03.2019).

Xu B., Wang N., Chen T., Li M. Empirical Evaluation of Rectified Activations in Convolutional Network // ICML Deep Learning Workshop, 2015. URL: http://arxiv.org/abs/1505.00853 (accessed 23.03.2019).

Lozhkin A. G. Symmetry as a unified property of space and a living organism // Tietta. 2010. No. 3 (13), pp. 23–32.

Bozek P., Lozhkin A., Galajdova A., Arkhipov I., Maiorov K. Information technology and pragmatic analysis // Computing and Informatics. 2018. Vol. 37, Issue 4, pp. 1011–1036.

Volkova I. A., Vylitok A. A., Rudenko T. V. Formal grammars and languages. Elements of translation theory: a textbook for second-year students. 3rd ed., revised and enlarged. Moscow: Publishing Department of the Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, 2009, pp. 5–20.

Aleksandrov P. S. Lectures on analytic geometry, supplemented with the necessary material from algebra; with an appended collection of problems with solutions, compiled by A. S. Parkhomenko. Moscow: Nauka, 1968. 911 p.

Lozhkin A. G., Maiorov K. N. On some problems in the development of autonomous robots // Vestnik IzhGTU imeni M. T. Kalashnikova. 2017. No. 4, pp. 114–116. DOI: 10.22213/2413-1172-2017-4-114-116.

Published

05.07.2019

How to Cite

Maiorov K. N., & Lozhkin A. G. (2019). Hypothesis of Neuron Activation According to the Laws of Symmetry. Intellekt. Sist. Proizv., 17(2), 43–49. https://doi.org/10.22213/2410-9304-2019-2-43-49

Issue

Vol. 17 No. 2 (2019)

Section

Articles