Tuesday, June 4, 2019

Image Pre-compensation for Ocular Aberrations

Introduction

Motivation

On-screen image pre-compensation has good prospects with the increasing usage of various display screen devices in our daily life. Compared to glasses, contact lenses and ocular surgery, on-screen image pre-compensation can be easily carried out by computation, without any irreversible change to the eyes, as long as the ocular optical aberration is known. Further, since neither contact lenses nor glasses are advised to be worn all of the time, on-screen pre-compensation could even supplement glasses and contact lens use. It is known that compensation for higher-order aberrations can lead to super-vision, reaching the neural limit of the human eye. On-screen compensation also has the prospect of achieving this with customized screens in the foreseeable future.

Image Processing Theories

Human Visual System

The human visual system is the combination of the optical system of the eye and the neural processing of the light information received Roorda (2011), of which the latter is outside the scope of this research. The optical system of the eye is an intricate construction including the pupil, cornea, retina and lens (see Fig.1). The light coming through the pupil is refracted by the lens and forms an inverted image on the retina. During this process, any deficit would cause aberrations. For instance, myopia may result from a lens whose refraction is too strong, or from a distance between the lens and retina that is too long.

Fig.1 Cross-section of eye structure

There is a limiting resolution governed by the neural receptors on the retina, which is below the diffraction limit. However, even for normal emmetropic eyes the sight is below both the neural limit and the diffraction limit, due to minor deficits of eye structure.
Austin (2011) For eyes with refractive issues, caused by deviations of the cornea or lens from an ideal spherical shape, the aberrations would significantly dominate over this limit. Thus, in the following research, we shall omit the neural limitation. To increase efficiency in what follows, we can simply model the eye structure as a single lens (regarding the cornea and the lens as a whole) with an adjustable surface (the pupil) and an image plane (the retina).

Point Spread Function and Image Quality

As stated in the previous section, the aberrations would come from any deficit of eye structure. In order to quantify the distortion in mathematical terms, we introduce the Point Spread Function (PSF). Fundamentally, the PSF is defined as the function describing the response of an imaging system to a point source or point object. Note that the loss of light is not considered in the PSF. Then, if we consider that the PSF does not change across the field of view, which applies to the central 1-2 degrees of visual angle Reference, the image can be expressed by the convolution of the PSF and the object in this area:

img(x, y) = PSF(x, y) * obj(x, y)    (1)

where * denotes convolution. Note that the deconvolution method is based on the inverse operation of Eq.1, which will be introduced in Section 1.2.4.

Fig.2 A contrast of PSF and MTF of an ideal emmetropic eye (up) and a typical myopic eye of -1.00 dioptre (down)

Now we introduce two functions that can show the quality of the image: the Optical Transfer Function (OTF) and the Modulation Transfer Function (MTF). Either OTF or MTF specifies the response to a periodic sine-wave pattern passing through the lens system, as a function of its spatial frequency or period, and its orientation WIKI. The OTF is the Fourier transform of the PSF, and the MTF is the magnitude of the OTF. In a 2D system, these two functions are defined as

OTF(fx, fy) = F{PSF(x, y)}    (2)

where F denotes the Fourier transform, and (fx, fy) and (x, y) denote coordinates in frequency space and Euclidean space, respectively.
MTF(fx, fy) = |OTF(fx, fy)|    (3)

where |.| means taking the absolute value.

Zernike Polynomials

The Zernike polynomials are a sequence of polynomials that are orthogonal over circular pupils. Some of the polynomials are related to classical aberrations. In optometry and ophthalmology, Zernike polynomials are the standard way to describe deviations of the cornea or lens from an ideal spherical shape, which result in refraction errors WIKI.

The definition of the orthogonal Zernike polynomials recommended in an ANSI standard is represented as

Z_n^m(rho, theta) = R_n^|m|(rho) T_m(theta)    (4)

where n and m denote the radial degree and the azimuthal frequency, respectively. The radial polynomials are defined as

R_n^|m|(rho) = sum_{k=0}^{(n-|m|)/2} [ (-1)^k (n-k)! / ( k! ((n+|m|)/2 - k)! ((n-|m|)/2 - k)! ) ] rho^(n-2k)    (5)

and the triangular functions are

T_m(theta) = cos(m theta) for m >= 0, and sin(|m| theta) for m < 0    (6)

Note that n - |m| must be even.

The relationship between the double index (n, m) and the single index (i) is shown in Table.1.

Table.1 Eye aberrations presented by Zernike polynomials

Aberrations are expressed as the distortion of the wavefront as it passes through the eye. As stated, Zernike polynomials are the standard way Campbell (2003) of quantifying this distortion. The aperture function (or pupil function) links the Zernike polynomials with the PSF:

P(x, y) = A(x, y) exp(i phi(x, y))    (8)

where P denotes the complex aperture function (or pupil function), phi denotes the phase of the wavefront, i is the imaginary unit, and A denotes the amplitude function, which is usually one inside the circular pupil and zero outside. The PSF can be expressed as the squared magnitude of the Fourier transform of the complex aperture function:

PSF(x, y) = |F{P(x, y)}|^2    (9)

We now know that the PSF can be calculated from a known wavefront, and that the distortion of the wavefront caused by refractive error can be represented by several orders of Zernike polynomials with their respective amplitudes, which can be precisely measured with a Shack-Hartmann wavefront analyser.

Deconvolution Method

We introduce a way to pre-process the image to neutralize the aberration caused by the eyes, which is also called image pre-compensation.
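The wavefront-to-PSF chain of Eqs.(4)-(9) can be sketched numerically. The following is a minimal Python/NumPy illustration, not the program used in this research; the grid size, pupil radius in samples, and defocus coefficient are all arbitrary assumptions for demonstration.

```python
import numpy as np
from math import factorial

def zernike_radial(n, m, rho):
    """Radial Zernike polynomial R_n^|m|(rho) of Eq.(5)."""
    m = abs(m)
    if (n - m) % 2:
        raise ValueError("n - |m| must be even")
    return sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
        * rho ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )

def psf_and_mtf(phase, amplitude):
    """Eqs.(8)-(9): pupil P = A exp(i phi), PSF = |F{P}|^2, and the MTF
    normalised so that MTF(0, 0) = 1."""
    pupil = amplitude * np.exp(1j * phase)
    psf = np.abs(np.fft.fft2(pupil)) ** 2
    otf = np.fft.fft2(psf)
    return psf, np.abs(otf) / np.abs(otf.flat[0])

# Toy example: pure defocus, the Z_2^0 term sqrt(3)*(2 rho^2 - 1), with an
# arbitrary half-wave coefficient (phase given directly in radians, glossing
# over the wavelength scaling a real simulation would need).
N = 128
y, x = (np.mgrid[0:N, 0:N] - N / 2) / (N / 4)   # pupil radius = N/4 samples
rho = np.hypot(x, y)
amplitude = (rho <= 1.0).astype(float)
phase = 2 * np.pi * 0.5 * np.sqrt(3) * zernike_radial(2, 0, rho) * amplitude
psf, mtf = psf_and_mtf(phase, amplitude)
```

Since the PSF is non-negative, the resulting MTF peaks at the zero frequency and never exceeds one, matching the behaviour shown in Fig.2.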
Simplistically, the idea is to compensate images in advance, to proactively counteract the degradations resulting from the ocular aberrations of different users. The Point Spread Function (PSF) is defined as the function describing the response of an imaging system to a point source or point object. The sinusoidal function is an eigenfunction of the system described by the PSF (i.e. if the input image is a sinusoidal function, no matter what the PSF is, the output image is also a sinusoidal function).

The image on the retina can be linked with the PSF by convolution, as shown in Eq.1. Taking the Fourier transform of both sides of the equation gives

IMG(u, v) = OTF(u, v) . OBJ(u, v)

Note that the convolution has changed to multiplication in frequency space. If we define a new object OBJ' as

OBJ'(u, v) = OBJ(u, v) / OTF(u, v)

the new image is

IMG'(u, v) = OTF(u, v) . OBJ'(u, v) = OBJ(u, v)

This means that if we can process the object into OBJ' as defined, we will get the intended image in the observer's eyes. To form OBJ' without dividing by near-zero values of the OTF, we introduce Minimum Mean Square Error filtering (the Wiener filter):

OBJ'(u, v) = [ OTF*(u, v) / ( |OTF(u, v)|^2 + K ) ] . OBJ(u, v)

where K is a constant and OTF* denotes the complex conjugate.

Calculation Theories

Fast Fourier Transform

As shown in previous sections, we use two algorithms that require a large amount of calculation: the Fourier transform (and inverse Fourier transform) and convolution. Since computer images can be seen as 2-dimensional lattices, we will use the 2D Discrete Fourier Transform. It is known that this process requires a significant amount of calculation, and the conventional way of doing it would take a long time on a regular PC. However, for the needs of this research, we must do this calculation in real-time. Thus, we introduce the Fast Fourier Transform (FFT). A definition of the FFT could be: "An FFT is an algorithm that computes the discrete Fourier transform (DFT) of a sequence, or its inverse. Fourier analysis converts a signal from its original domain (often time or space) to a representation in the frequency domain and vice versa. An FFT rapidly computes such transformations by factorizing the DFT matrix into a product of sparse (mostly zero) factors."
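The Wiener-filter pre-compensation derived above can be sketched end-to-end, with the forward convolution of Eq.1 also done in the frequency domain. This is an illustrative sketch only: the Gaussian kernel standing in for an aberrated eye's PSF, the grid size, and the value of K are assumptions, not measured quantities.

```python
import numpy as np

def fft_convolve(img, psf):
    """Circular convolution via the frequency domain: F^-1{ F(img) . F(psf) }."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))

def wiener_precompensate(obj, psf, K=1e-2):
    """Wiener pre-compensation: OBJ' = OTF* / (|OTF|^2 + K) . OBJ."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    obj_f = np.fft.fft2(obj)
    return np.real(np.fft.ifft2(np.conj(otf) / (np.abs(otf) ** 2 + K) * obj_f))

# A Gaussian blur stands in for the eye's PSF (an assumption; a real PSF
# would come from measured Zernike coefficients).
N = 64
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
psf = np.exp(-(x ** 2 + y ** 2) / (2 * 1.5 ** 2))
psf /= psf.sum()                      # PSF conserves energy (no light loss)

obj = np.zeros((N, N))
obj[24:40, 24:40] = 1.0               # a bright square as the intended image

seen_plain = fft_convolve(obj, psf)   # what the eye sees without compensation
seen_comp = fft_convolve(wiener_precompensate(obj, psf), psf)  # with it
```

The compensated percept is closer to the intended image than the plain blurred one. Note that the pre-compensated frame generally contains values outside the displayable range and must be rescaled before display, which is where the contrast loss associated with this method comes from.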
(Van Loan, 1992)

Also, all convolution within our program will be calculated by means of the FFT, through the following equation:

f * g = F^-1{ F{f} . F{g} }    (16)

Fig.3 A contrast of the speed of the two means of calculating convolution, with respect to data length.

The purpose of doing so is to accelerate the calculation, since the conventional way of calculating convolution is much slower than the FFT. This difference in speed is shown in Fig.3.

Nyquist Limit

As stated, we need both the image and the PSF before doing the pre-compensation. The PSF is calculated from the aperture function, Eq.9. To simulate the pupil, we can use a circular aperture. However, this circular pupil has some restrictions in a computer simulation, namely the Nyquist limit. In signal processing, if we want to reconstruct all Fourier components of a periodic waveform, there is a restriction that the sampling rate needs to be at least twice the highest waveform frequency. The Nyquist limit, also known as the Nyquist frequency, is the highest frequency that can be coded at a given sampling rate in order to be able to fully reconstruct the signal; it is half of the sampling rate of a discrete signal processing system. Cramér and Grenander (1959)

For our simulation, aliasing will occur when the sampling rate n falls below this Nyquist limit.

Psychometric Theories

In order to quantify the enhancement that the deconvolution method gives the subjects, we need to measure the change in the thresholds of the eyes before and after the compensation. Specifically, in our research we need to find the thresholds of minimum contrast and size of an image that the subjects can correctly recognize. This requires the use of some psychometric theories.

Adaptive Staircase Method

The staircase method is widely used in psychophysics research. The point of the staircase method is to adjust the intensity of the stimuli according to the response of the participant.
To illustrate this method we shall use an example introduced by Cornsweet (1962):

"Suppose the problem is to determine S's absolute intensity threshold for the sound of a click. The first stimulus that E delivers is a click of some arbitrary intensity. S responds either that he did or did not hear it. If S says yes (he did hear it), the next stimulus is made less intense, and if S says no, the second stimulus is made more intense. If S responds yes to the second stimulus, the third is made less intense, and if he says no, it is made more intense. This procedure is simply continued until some predetermined criterion or number of trials is reached. The results of a series of 30 trials are shown in Fig.4. The results may be recorded directly on graph-paper; doing so helps E keep the procedure straight."

Fig.4 An example staircase by Cornsweet (1962)

There are four important characteristics of the adaptive staircase method: (1) starting value, (2) step-size, (3) stopping condition and (4) modification of step-sizes. Cornsweet 1962

The starting value should be near the threshold value. As shown in Fig.4, the starting point determines how many steps it takes to reach a level near the threshold. The test will be most efficient if the starting value is near that threshold.

The step-size is 1 dB for the example test. The step-size should be neither so large that the threshold cannot be measured accurately, nor so small that it slows down the test process. It is advised that the step-size is most effective when it is about the size of the differential threshold.

The result of the staircase method would in general look like Fig.4, where it hovers around a certain level of stimulus intensity. Once this asymptotic level has been reached, the trials should be taken into account.
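The procedure in Cornsweet's example can be sketched as a small simulation. The deterministic simulated observer (always "yes" at or above a fixed true threshold) is an illustrative assumption of ours; real responses are probabilistic.

```python
def staircase(true_threshold, start, step, n_trials):
    """Run a simple yes/no up-down staircase against a simulated observer and
    return the sequence of stimulus levels presented."""
    level = start
    levels = []
    for _ in range(n_trials):
        levels.append(level)
        if level >= true_threshold:   # "yes": make the next stimulus weaker
            level -= step
        else:                         # "no": make the next stimulus stronger
            level += step
    return levels

levels = staircase(true_threshold=5.0, start=12.0, step=1.0, n_trials=40)
estimate = sum(levels[-10:]) / 10     # average over the asymptotic trials
```

After the initial descent, the track oscillates around the true threshold, and averaging the final trials recovers it to within a step-size, which is why the starting value and step-size matter for efficiency.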
An efficient way is to fix the number of trials that need to be recorded, and to start counting them once the staircase reaches the asymptotic level.

Under some conditions, the step-size needs to be changed during the test. For careful experimental design, the first stimulus in each of the staircases is at the same intensity level. Cornsweet 1962 However, the starting level may then be too far from the final level. This can be avoided by using large steps at the start and smaller steps as it approaches the final level; for instance, by decreasing the step from 3 dB to 1 dB at the third reversal.

It should be stated that the adaptive staircase method is a very efficient means of measurement. For a given reliability of a computed threshold value, the staircase method requires the presentation of many fewer stimuli than any other psychophysical method.

Related Work

General image compensation has long been used since the invention of the lens. The invention of the computer and of portable display devices makes it easier to perform on-screen image pre-compensation. On-screen compensation has the advantage of convenience, in that it can easily be carried out with any display-screen device that can compute. In addition, acuity limits of human vision on the fovea are found to be between 0.6 and 0.25 arc minutes Schwiegerling 2000, which is better than the typical acuity of emmetropic eyes Pamplona 2012. This means that effective compensation may increase the performance even of emmetropic eyes.

Deconvolution Method

On-screen image pre-compensation is based on the idea that the aberrations can be neutralized by pre-compensating the object. Specifically, it requires dividing the Fourier transform of the uncorrected image by the Fourier transform of the PSF (i.e. the OTF). A detailed derivation can be found in Section 1.2.4. Early research by Alonso and Barreto (2003) tested subjects with defocus aberration using this method.
Their results showed an improvement in observers' visual acuity compared to non-corrected images. However, in practical use, for example with defocus, the defocus magnitude (in dioptres) as well as the pupil size, wavelength and viewing distance (visual angle) are required to calculate and scale the PSF, which means measurement and substitution of these parameters are also required to deliver the intended compensation.

Enhancement of the Deconvolution Method

Recent research has further improved the deconvolution method. Huang et al. (2012) carried out work on dynamic image compensation. They fixed the viewing distance from the screen and measured the real-time pupil size with the help of a Tobii T60 eye tracker device, then compensated the image with this real-time pupil size data. Both reliability and acuity were improved by this dynamic compensation. Unlike perfect eyes, for which a bigger pupil size leads to a smaller diffraction-limited PSF, for most eyes a bigger pupil size leads to an increase in aberrations. That is also why dynamic compensation is important.

As mentioned in the previous section, the principle of pre-compensation is to divide the Fourier transform of the image by the OTF. In order to avoid near-zero values in the OTF, most of the research used Minimum Mean Square Error filtering (the Wiener filter). However, the outcome usually suffers from an apparent loss of contrast.

Recent research has revealed other ways to optimize the compensation for higher contrast and sharper boundaries. The multi-domain approach was introduced by Alonso Jr et al. (2006). They claimed that there are unnecessary components in the pre-compensated image; simplistically, some of the compensation is irrelevant with respect to the important information in the image. This work showed an improvement of acuity using this method with respect to recognising text. More recently, Montalto et al.
(2015) applied the total variation method to process the pre-compensated image. The result is slightly better, but still suffers from a trade-off between contrast and acuity. Fundamentally, the afflicted human eye can be seen as a low-pass filter, and either an increase in image aliasing or a decrease in contrast is inevitable.

Other Approaches

The research described above can be seen as an enhancement and a supplement of the original method carried out by Alonso (2003). However, as stated, there is a limit to image pre-compensation by the PSF deconvolution method. Others have studied different on-screen methods to achieve a better outcome. Huang et al. (2012) introduced a multilayer approach based on the drawback of normal on-screen pre-compensation that was shown by Yellott and Yellott (2007). This method is based on the deconvolution method, but uses a double-layer display rather than a normal display. Following Fig.2, if we have two separated displays, then we have two different MTF curves, and the near-zero gaps in the MTF can be filled. This approach showed a demonstrable improvement of contrast and brightness in their simulation. However, it required a transparent front display that does not block the light from the rear display at all, which is not plausible in practical use.

Later, Pamplona et al. (2012) investigated a light field theory approach and built a monochrome dual-stack-LCD display (also known as parallax barriers) prototype and a lenticular-based display prototype to form directive light. Huang et al. (2014) restated the potential of using light field theory for image compensation and built another prototype with a parallax barrier mask and higher resolution. The outcomes of both methods were similar: they could produce colour images with only a small decrease in contrast and acuity.
However, it should be stated that both methods were carried out with a fixed directional light field, which used a fixed camera to photograph the intended corrected image. That is obviously not feasible in practical use with a moving observer. Adjustable directional light has not been implemented, due to the limits imposed by diffraction and resolution. In addition, there are minor issues with loss of brightness as well in this research.

Overall, the most applicable way of performing on-screen image compensation is still the deconvolution method. The light field method requires very precise eye tracking to inject the light into the pupil, while deconvolution only requires the observer to keep a certain distance and to be in front of the pre-compensated image.

Method

Subjects

Implementation

We built a program for the test that can perform the pre-compensation in real-time using the deconvolution method. This program can pre-compensate any aberration that can be represented by Zernike polynomials.

The experiment is based on the adaptive staircase method. During the experiment, the program shows the optotype Landolt-C in one of four directions (i.e. up, down, left and right), randomly generated at each trial. The subjects choose the direction of the Landolt-C.

Staircase

This research intends to find two thresholds: contrast and size. We shall describe the staircase method for the contrast threshold; the experiment for the size threshold is carried out likewise.

The four characteristics of our adaptive staircase method are:
(1) The starting value is relatively large since the subject
(2) The step-size
(3) The experiment ends N trials after it has reached the final level

For our research, we cannot determine an ideal starting value, because the subjects have different types and intensities of aberration.
Thus, we have to change the step-size to make our experiment efficient.

The threshold is calculated from the recorded last N trials of the experiment, as determined by the following equation Eq.()

The program was designed such that

Assumptions, Approximations and Limitations

Assumptions About Subjects

Limitations: Polychromatic issues, No. of Pixels, Staircase

References

Alonso, M., & Barreto, A. B. (2003, September). Pre-compensation for high-order aberrations of the human eye using on-screen image deconvolution. In Engineering in Medicine and Biology Society, 2003. Proceedings of the 25th Annual International Conference of the IEEE (Vol. 1, pp. 556-559). IEEE.

Alonso Jr, M., Barreto, A., Jacko, J. A., & Adjouadi, M. (2006, October). A multi-domain approach for enhancing text display for users with visual aberrations. In Proceedings of the 8th International ACM SIGACCESS Conference on Computers and Accessibility (pp. 34-39). ACM.

Campbell, C. E. (2003). A new method for describing the aberrations of the eye using Zernike polynomials. Optometry & Vision Science, 80(1), 79-83.

Cornsweet, T. N. (1962). The staircase-method in psychophysics. The American Journal of Psychology, 75(3), 485-491.

Harvey, L. O. (1986). Efficient estimation of sensory thresholds. Behavior Research Methods, Instruments, & Computers, 18(6), 623-632.

Huang, F. C., Wetzstein, G., Barsky, B. A., & Raskar, R. (2014). Eyeglasses-free display: towards correcting visual aberrations with computational light field displays. ACM Transactions on Graphics (TOG), 33(4), 59.

Huang, J., Barreto, A., & Adjouadi, M. (2012, August). Dynamic image pre-compensation for computer access by individuals with ocular aberrations. In 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (pp. 3320-3323). IEEE.

Montalto, C., Garcia-Dorado, I., Aliaga, D., Oliveira, M. M., & Meng, F. (2015). A total variation approach for customizing imagery to improve visual acuity.
ACM Transactions on Graphics (TOG), 34(3), 28.

Pamplona, V. F., Oliveira, M. M., Aliaga, D. G., & Raskar, R. (2012). Tailored displays to compensate for visual aberrations.

Roorda, A. (2011). Adaptive optics for studying visual function: a comprehensive review. Journal of Vision, 11(5), 6-6.

Schwiegerling, J. (2000). Theoretical limits to visual performance. Survey of Ophthalmology, 45(2), 139-146.

Yellott, J. I., & Yellott, J. W. (2007, February). Correcting spurious resolution in defocused images. In Electronic Imaging 2007 (pp. 64920O-64920O). International Society for Optics and Photonics.

Young, L. K., Love, G. D., & Smithson, H. E. (2013). Different aberrations raise contrast thresholds for single-letter recognition in line with their effect on cross-correlation-based confusability. Journal of Vision, 13(7), 12-12.

Van Loan, C. (1992). Computational frameworks for the fast Fourier transform (Vol. 10). SIAM.

Cramér, H., & Grenander, U. (1959). Probability and statistics: the Harald Cramér volume. Almqvist & Wiksell.
