Now, just google your phone

Losing your phone is easy. Finding it again, unfortunately, often isn't. Google has just made the search a little easier for Android users, however: they need only type "find my phone" into Google Search in a desktop browser.
In order for the functionality to work, users must be logged into the same Google account on the browser as they are on their phone. They must also have the latest version of the Google app installed on their device. Assuming all that is the case, a "Find your phone" map will appear at the top of the Google Search results.

Once Google has successfully pinpointed the phone in question (as long as it's powered on, this should work), it displays the device's location on a map and offers to ring it. Ringing it helps on those occasions when your phone has slipped behind a couch cushion or the like, while showing it on a map is useful if you've left it somewhere like a coffee shop or bar.
Previously, it was necessary to install the Android Device Manager app to locate a lost Android phone (assuming you hadn't turned the service off, as it's on by default). Plenty of users simply won't have been aware this was possible, and the new approach, essentially a web-based version of Android Device Manager, makes it easy for anyone to locate a lost device.



Obtaining HDR images is now a faster process.

Researchers from the Optics Department at the University of Granada have developed a new algorithm for capturing high dynamic range (HDR) images that reduces the capture time or the noise level of the resulting image. Beyond photography, the development can also be applied to artificial vision systems, medical imaging, quality control systems on assembly lines, satellite imaging, surveillance systems, and assisted or automatic driving systems for vehicles.
The research, published in the journal Applied Optics, has produced an algorithm that, based on how each camera responds to light, adapts the capture times of the different exposures automatically and instantaneously, reducing the total capture time and adjusting to the conditions and needs of each scene.
Although the most striking and visually attractive application of HDR techniques is in photography, these techniques are particularly relevant to robotic vision systems. "In the same way that the human visual system has a high dynamic range, artificial vision systems also need this sort of technique to behave in a similar, or even more sophisticated, way compared with our own visual system," says the author of the research, Miguel Ángel Martínez Domingo, of the Optics Department at the University of Granada.
The exposure time is the time during which the camera sensor is exposed to light while an image is captured. With a short exposure, the dark zones of the scene appear underexposed (black) in the final image. With a long exposure, by contrast, the brightest zones of the scene appear saturated, or burnt out (white).
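To make that clipping concrete, here is a toy simulation (not from the paper) of a linear 8-bit sensor. The scene radiances and the sensor gain are made-up numbers chosen purely for illustration:

```python
import numpy as np

# Toy model of a linear 8-bit sensor: the pixel value is proportional to
# radiance x exposure time, clipped to the 0-255 range. All numbers here
# are illustrative, not measurements from the paper.
scene = np.array([0.5, 10.0, 100.0])  # shadow, midtone, highlight radiance
gain = 255 / 4.0                      # made-up gain: an exposure of 4.0 saturates

for t in (0.01, 0.1, 1.0):            # short, medium, long exposure (seconds)
    pixels = np.clip(scene * t * gain, 0, 255).astype(np.uint8)
    print(f"t={t:>4}s -> {pixels}")   # short: shadow -> 0; long: highlight -> 255
```

At the short exposure the shadow pixel is crushed to 0 (black); at the long one the highlight pixel is pinned at 255 (white), exactly the two failure modes described above.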


Underexposed or saturated zones

"In general, in any scene captured nowadays, and even though our camera automatically adjusts the time of exposure, there will always be sub exposed or saturated zones. This happens due to the fact that the range of luminosities (or dynamic range) which the sensor in any conventional camera can capture correctly in a single shot is smaller than the actual dynamic range of the scene itself. This is where HDR capture techniques make sense", according to Martínez Domingo.

What matters is not just that the final image looks pleasing or realistic to the human eye: it can be important, for instance, to capture with low noise the details of both a very bright component and a dark one on an integrated circuit. The noise level of the resulting image, however, is in tension with the total capture time of the different exposures required.

The algorithm developed by the University of Granada scientists optimizes the balance between a shorter capture time and a lower noise level in the resulting image. The noise level drops as more exposures are captured to build the HDR image, but using many different exposures would increase the capture time excessively. "The new algorithm adapts not only to the camera we are using and the scene we are capturing, but also to the specific needs of the application we are developing," says Martínez Domingo.
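As a toy illustration of that balance (again, not the published algorithm), one can score each candidate number of equal-length exposures with a weighted sum of a normalized capture time and an idealized noise term that falls as one over the square root of the frame count, then pick the cheapest option. The weighting knob alpha is a made-up parameter standing in for the application's priorities:

```python
import math

def pick_exposure_count(max_frames=16, alpha=0.5):
    """Toy trade-off: averaging n frames cuts noise roughly as 1/sqrt(n),
    while the total capture time grows linearly with n. 'alpha' is a
    made-up knob weighting speed against noise for a given application."""
    best_n, best_cost = 1, float("inf")
    for n in range(1, max_frames + 1):
        noise = 1.0 / math.sqrt(n)    # idealized noise of an n-frame average
        time_penalty = n / max_frames  # normalized total capture time
        cost = alpha * time_penalty + (1 - alpha) * noise
        if cost < best_cost:
            best_n, best_cost = n, cost
    return best_n

# A speed-critical application (alpha near 1) picks few frames; a
# noise-critical one (alpha near 0) picks many.
print(pick_exposure_count(alpha=0.9), pick_exposure_count(alpha=0.1))
```

The published method goes further by adapting the individual exposure times to the camera's measured response and the scene itself, but the basic tension it resolves is the one this sketch scores.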


Consequently, "for the first time an HDR image capture algorithm adapts itself to any camera, any scene, and any application, in an easy and optimal way, without having to use complex optic systems or non-conventional sensors architectures".


