Our study was motivated by the finding that rat and rabbit heart transplants do not survive preservation beyond six hours: they become dark and hard, fail to contract within 2 min after reperfusion, and never regain their function. We tested supplementation of heart-transplant preservation solutions with tetrahydrobiopterin (H4B) and L-arginine (L-ARG) to maintain the oxidative and reductive domains of the endocardial NO synthase. Excised rabbit hearts were preserved in Hank’s balanced salt solution (HBSS) at 0 °C supplemented with different concentrations of H4B (0, 1, 5, 10 or 100 µM). At the desired time intervals, successive pieces stored in these solutions were warmed to rabbit body temperature in 4 ml of HBSS and maximally agonized by direct application of 20 µl of 200 µM bradykinin (or another agonist) onto the exposed endocardium. Nitric oxide bursts were monitored with a porphyrinic NO sensor placed on the exposed endocardium. Our goal was to find the lowest H4B concentration that would maximally stimulate NO• release and prolong heart preservation beyond 6 hours. Ten µM is the minimum H4B concentration that achieves maximum prolongation of heart preservation time, up to 90 hours. This effect was based on maximal potentiation of NO• release and minimization of superoxide production.
We provide new sufficient conditions for the convergence of the secant method to a locally unique solution of a nonlinear equation in a Banach space. Our new idea uses “Lipschitz-type” and center-“Lipschitz-type” conditions, instead of only “Lipschitz-type” conditions, on the divided difference of the operator involved. It turns out that in this way our error bounds are more precise than the earlier ones, and under our convergence hypotheses we can cover cases where the earlier conditions are violated.
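For readers unfamiliar with the method itself, the classical scalar secant iteration x_{n+1} = x_n − f(x_n)(x_n − x_{n−1})/(f(x_n) − f(x_{n−1})) can be sketched as follows; this is a generic illustration of the iteration the abstract analyzes, not the paper's Banach-space setting, and the function and starting points are chosen purely for the example.

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Find a root of f near x0, x1 via the secant method."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:  # divided difference vanished; cannot continue
            break
        # Secant step: replace f' in Newton's method by a divided difference.
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Example: root of f(x) = x^2 - 2, i.e. sqrt(2), from starting points 1 and 2.
root = secant(lambda x: x * x - 2.0, 1.0, 2.0)
```

The convergence conditions studied in the abstract govern when this iteration, applied to an operator between Banach spaces, is guaranteed to converge to a locally unique solution.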
Feature reduction is an important issue in pattern recognition. Lower feature dimensionality can reduce the complexity and enhance the generalization ability of classifiers. In this paper we propose a new supervised dimensionality reduction method based on Locally Linear Embedding and Distance Metric Learning. First, in order to increase the interclass separability, a linear discriminant transformation learnt by distance metric learning is used to map the original data points to a new space. Then Locally Linear Embedding is adopted to reduce the dimensionality of the data points. This process extends the traditional unsupervised Locally Linear Embedding to the supervised scenario in a clear and natural way. In addition, it can also be seen as a general framework for developing new supervised dimensionality reduction algorithms from corresponding unsupervised methods. Extensive classification experiments performed on real-world and artificial datasets show that the proposed method can achieve results comparable to, or even better than, those of other state-of-the-art dimensionality reduction methods.
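The two-stage pipeline described above can be sketched with off-the-shelf components. In this sketch, Fisher's linear discriminant analysis stands in for the distance-metric-learning step (an assumption; the paper's metric learner may differ), and scikit-learn's standard LLE performs the second stage; the dataset and neighbor counts are illustrative choices.

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Stage 1: a label-aware linear map that increases interclass separability
# (LDA used here as a stand-in for the learned distance metric).
X_sep = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)

# Stage 2: plain unsupervised LLE applied to the transformed points.
X_emb = LocallyLinearEmbedding(n_neighbors=10, n_components=2).fit_transform(X_sep)

# Evaluate the supervised embedding with a simple k-NN classifier.
score = cross_val_score(KNeighborsClassifier(n_neighbors=3), X_emb, y, cv=5).mean()
```

Because the supervision enters only through the first-stage linear map, any unsupervised embedding (Isomap, Laplacian eigenmaps, etc.) could be substituted for LLE in stage 2, which is the "general framework" reading of the method.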