In this paper, we develop computational procedures for approximating the spectral abscissa of a switched linear system via square coordinate transformations. First, we design iterative algorithms that generate a sequence of least μ1 measures. Second, we show that this sequence converges and that its limit can be used to estimate the spectral abscissa. We also present a stopping condition for Algorithm 1. Finally, an example illustrates the effectiveness of the proposed method.
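To make the key quantities concrete: for a matrix $A$, the μ1 matrix measure upper-bounds the spectral abscissa $\alpha(A)$, and a square coordinate transformation $TAT^{-1}$ preserves $\alpha$ while changing μ1 — which is why minimizing μ1 over transformations estimates $\alpha$. The following is a minimal NumPy sketch of this principle, not the paper's Algorithm 1; the example matrix and scaling are illustrative choices.

```python
import numpy as np

def mu1(A):
    """mu_1 matrix measure: max_j ( a_jj + sum_{i != j} |a_ij| )."""
    A = np.asarray(A, dtype=float)
    d = A.diagonal()
    return (d + np.abs(A).sum(axis=0) - np.abs(d)).max()

def spectral_abscissa(A):
    """Largest real part of the eigenvalues of A."""
    return np.linalg.eigvals(A).real.max()

A = np.array([[-1.0, 4.0],
              [0.0, -2.0]])
# mu1(A) = 2.0 is a (loose) upper bound on alpha(A) = -1.0.

# A square (here diagonal) coordinate transformation T A T^{-1} preserves
# the spectrum, so alpha is unchanged, while mu1 is driven down toward alpha.
T = np.diag([1.0, 8.0])
B = T @ A @ np.linalg.inv(T)     # mu1(B) = -1.0 = alpha(A)
```

Iterating such transformations to shrink μ1, as the abstract's algorithms do, yields successively tighter upper estimates of the spectral abscissa.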
We present a lower and an upper bound for the second smallest eigenvalue of the Laplacian matrix of a weighted graph in terms of its averaged minimal cut. This is used to obtain an upper bound on the real parts of the non-maximal eigenvalues of an irreducible nonnegative matrix. The result can be applied to Markov chains.
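The quantity being bounded, the second smallest Laplacian eigenvalue (the algebraic connectivity, or Fiedler value), can be computed directly for a small weighted graph. This minimal NumPy sketch illustrates the object of study, not the paper's bound; the example graph is an arbitrary weighted path.

```python
import numpy as np

def laplacian(W):
    """Graph Laplacian L = D - W for a symmetric weight matrix W."""
    W = np.asarray(W, dtype=float)
    return np.diag(W.sum(axis=1)) - W

def algebraic_connectivity(W):
    """Second smallest eigenvalue of the Laplacian (Fiedler value)."""
    return np.sort(np.linalg.eigvalsh(laplacian(W)))[1]

# Weighted path on 3 vertices: edge 0-1 with weight 1, edge 1-2 with weight 2.
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 2.0],
              [0.0, 2.0, 0.0]])
# Eigenvalues of L are 0 and 3 +- sqrt(3); the Fiedler value is 3 - sqrt(3).
```

The smallest Laplacian eigenvalue is always 0, and the Fiedler value is positive exactly when the graph is connected, which is what makes it a natural target for cut-based bounds.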
For a graph property $\mathcal {P}$ and a graph $G$, we define the domination subdivision number with respect to the property $\mathcal {P}$ to be the minimum number of edges that must be subdivided (where each edge in $G$ can be subdivided at most once) in order to change the domination number with respect to the property $\mathcal {P}$. In this paper we obtain upper bounds in terms of maximum degree and orientable/non-orientable genus for the domination subdivision number with respect to an induced-hereditary property, total domination subdivision number, bondage number with respect to an induced-hereditary property, and Roman bondage number of a graph on topological surfaces.
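For intuition, the classical (unrestricted) domination number and its subdivision number can be brute-forced on a small graph. The sketch below uses hypothetical helper names and plain domination rather than a property-restricted variant; it checks the cycle $C_5$, where subdividing one edge yields $C_6$ (domination number unchanged) and subdividing two edges yields $C_7$ (domination number increases).

```python
from itertools import combinations

def domination_number(n, edges):
    """Brute-force the domination number of a graph on vertices 0..n-1."""
    nbrs = {v: set() for v in range(n)}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            S = set(S)
            if all(v in S or nbrs[v] & S for v in range(n)):
                return size
    return n

def subdivide(n, edges, chosen):
    """Subdivide each edge in `chosen` once by inserting a new vertex."""
    edges = [e for e in edges if e not in chosen]
    for u, v in chosen:
        w = n          # fresh vertex on the subdivided edge
        n += 1
        edges += [(u, w), (w, v)]
    return n, edges

def subdivision_number(n, edges):
    """Least number of edges (each subdivided once) that changes gamma."""
    g0 = domination_number(n, edges)
    for k in range(1, len(edges) + 1):
        for chosen in combinations(edges, k):
            n2, e2 = subdivide(n, list(edges), list(chosen))
            if domination_number(n2, e2) != g0:
                return k
    return None

# Cycle C5: gamma(C5) = 2, and two subdivisions (giving C7) raise it to 3.
c5 = [(i, (i + 1) % 5) for i in range(5)]
```

Exhaustive search like this is exponential in the vertex count, which is precisely why the topological upper bounds of the paper are of interest.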
Determining the number of nodes in the hidden layers of a neural network is a fundamental and challenging problem. Various efforts have been made to relate approximation ability to the number of hidden nodes in specific neural networks, such as single-hidden-layer and two-hidden-layer feedforward networks with specific or conditional activation functions. For arbitrary feedforward neural networks, however, there are few theoretical results on this issue. This paper gives an upper bound on the number of nodes in each hidden layer of the most general feedforward neural networks, multilayer perceptrons (MLPs), from an algebraic point of view. First, we put forward the method of expansion linear spaces to investigate the algebraic structure and properties of the outputs of MLPs. Then we prove that, given k distinct training samples, for any MLP with k nodes in each hidden layer, if a certain optimization problem has solutions, the approximation error remains invariant when nodes are added to the hidden layers. Furthermore, we show that for any MLP whose output-layer activation function is bounded on R, at most k nodes per hidden layer are needed to learn k training samples.
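The final claim — k hidden nodes suffice to learn k samples — can be illustrated in the single-hidden-layer case: with k generic hidden units, the k×k matrix of hidden outputs at the samples is nonsingular, so output weights exist that fit all k samples exactly. The NumPy sketch below uses bounded Gaussian activations centred at the samples for numerical robustness; it is an illustration of the interpolation idea, not the paper's general MLP construction.

```python
import numpy as np

# k distinct training samples of a scalar target function
k = 5
x = np.linspace(0.0, 1.0, k)
y = np.sin(3.0 * x)

# One hidden layer with k Gaussian nodes centred at the samples;
# the activation is bounded on R, as in the theorem's hypothesis.
H = np.exp(-((x[:, None] - x[None, :]) / 0.2) ** 2)   # k x k hidden outputs

# For distinct centres this matrix is nonsingular, so output weights
# exist that reproduce all k samples exactly: zero training error.
v = np.linalg.solve(H, y)
err = np.max(np.abs(H @ v - y))   # ~ machine precision
```

Adding further hidden nodes enlarges the span of the hidden outputs, so it cannot worsen this already-zero error — consistent with the invariance result stated above.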
The paper deals with the existence of a quasi-continuous selection of a multifunction for which the upper inverse image of any open set with compact complement contains a set of the form $(G\setminus I)\cup J$, where $G$ is open and $I$, $J$ belong to a given ideal. The methods are based on the properties of a minimal multifunction generated by a cluster process with respect to a system of subsets of the form $(G\setminus I)\cup J$.
Let $R$ be an integral domain with quotient field $K$ and $f(x)$ a polynomial of positive degree in $K[x]$. In this paper we develop a method for studying almost principal uppers to zero. More precisely, we prove that divisorial uppers to zero of the form $I = f(x)K[x] \cap R[x]$ are almost principal in the following two cases:
– $J$, the ideal generated by the leading coefficients of $I$, satisfies $J^{-1} = R$;
– $I^{-1}$, as an $R[x]$-submodule of $K(x)$, is of finite type.
Furthermore, we prove that for $I = f(x)K[x] \cap R[x]$ we have:
– $I^{-1}\cap K[x]=(I:_{K(x)}I)$;
– if there exists $p/q \in I^{-1}\setminus K[x]$, then $(q,f)\neq 1$ in $K[x]$; if in addition $q$ is irreducible and $I$ is almost principal, then $I' = q(x)K[x] \cap R[x]$ is an almost principal upper to zero.
Finally, we show that a Schreier domain $R$ is a greatest common divisor domain if and only if every upper to zero in $R[x]$ contains a primitive polynomial.