History of Diamond

The name diamond is derived from the ancient Greek ἀδάμας (adámas), “proper”, “unalterable”, “unbreakable”, “untamed”, from ἀ- (a-), “un-” + δαμάω (damáō), “I overpower”, “I tame”.[3] Diamonds are thought to have been first recognized and mined in India, where significant alluvial deposits of the stone could be found many centuries ago along the rivers Penner, Krishna and Godavari. Diamonds have been known in India for at least 3,000 years but most likely 6,000 years.[4]

Diamonds have been treasured as gemstones since their use as religious icons in ancient India. Their usage in engraving tools also dates to early human history.[5][6] The popularity of diamonds has risen since the 19th century because of increased supply, improved cutting and polishing techniques, growth in the world economy, and innovative and successful advertising campaigns.[7]

In 1772, the French scientist Antoine Lavoisier used a lens to concentrate the rays of the sun on a diamond in an atmosphere of oxygen, and showed that the only product of the combustion was carbon dioxide, proving that diamond is composed of carbon.[8] Later, in 1797, the English chemist Smithson Tennant repeated and expanded that experiment.[9] By demonstrating that burning diamond and graphite releases the same amount of gas, he established the chemical equivalence of these substances.[10]

The most familiar uses of diamonds today are as gemstones used for adornment, and as industrial abrasives for cutting hard materials. The dispersion of white light into spectral colors is the primary gemological characteristic of gem diamonds. In the 20th century, experts in gemology developed methods of grading diamonds and other gemstones based on the characteristics most important to their value as a gem. Four characteristics, known informally as the four Cs, are now commonly used as the basic descriptors of diamonds: these are carat (its weight), cut (quality of the cut is graded according to proportions, symmetry and polish), color (how close to white or colorless; for fancy diamonds, how intense its hue is), and clarity (how free it is from inclusions).[11] A large, flawless diamond is known as a paragon.

Evolution in Virtual Gaming

Virtual reality (VR) is a computer technology that uses software-generated realistic images, sounds and other sensations to replicate a real environment or an imaginary setting, and simulates a user’s physical presence in this environment to enable the user to interact with this space. A person using virtual reality equipment is typically able to “look around” the artificial world, move about in it and interact with features or items that are depicted. Virtual realities artificially create sensory experiences, which can include sight, touch, hearing, and, less commonly, smell. Most 2016-era virtual realities are displayed either on a computer monitor, a projector screen, or with a virtual reality headset (also called head-mounted display or HMD). HMDs typically take the form of head-mounted goggles with a screen in front of the eyes. Some simulations include additional sensory information and provide sounds through speakers or headphones.

Some advanced haptic systems in the 2010s now include tactile information, generally known as force feedback in medical, video gaming and military applications. Some VR systems used in video games can transmit vibrations and other sensations to the user via the game controller. Virtual reality also refers to remote communication environments which provide a virtual presence of users through the concepts of telepresence and telexistence, or the use of a virtual artifact (VA), either through the use of standard input devices such as a keyboard and mouse, or through multimodal devices such as a wired glove or omnidirectional treadmills.

The immersive environment can be similar to the real world in order to create a lifelike experience—for example, in simulations for pilot or combat training, which depict realistic images and sounds of the world, where the normal laws of physics apply, or it can differ significantly from reality, such as in VR video games that take place in fantasy settings, where gamers can use fictional magic and telekinesis powers.

In 1938, Antonin Artaud described the illusory nature of characters and objects in the theatre as “la réalité virtuelle” in a collection of essays, Le Théâtre et son double. The English translation of this book, published in 1958 as The Theater and its Double, is the earliest published use of the term “virtual reality”. The term “artificial reality”, coined by Myron Krueger, has been in use since the 1970s. The term “virtual reality” was used in The Judas Mandala, a 1982 science fiction novel by Damien Broderick. The Oxford English Dictionary cites a 1987 article titled “Virtual reality”, but the article is not about VR technology. “Virtual” has had the meaning “being something in essence or effect, though not actually or in fact” since the mid-1400s, probably via the sense of “capable of producing a certain effect”. The term “virtual” has been used in the computer sense of “not physically existing but made to appear by software” since 1959. The term “reality” has been used in English since the 1540s, to mean “quality of being real”, from French réalité and directly from Medieval Latin realitatem (nominative realitas), from Late Latin realis.
Also notable among the earlier hypermedia and virtual reality systems was the Aspen Movie Map, which was created at MIT in 1978. The program was a crude virtual simulation of Aspen, Colorado, in which users could wander the streets in one of three modes: summer, winter, and polygons. The first two were based on photographs—the researchers actually photographed every possible movement through the city’s street grid in both seasons—and the third was a basic 3-D model of the city. Atari founded a research lab for virtual reality in 1982, but the lab was closed after two years due to the Atari Shock (the North American video game crash of 1983).

However, its former employees, such as Tom Zimmerman, Scott Fisher, Jaron Lanier and Brenda Laurel, continued their research and development on VR-related technologies. In the 1980s the term “virtual reality” was popularized by Jaron Lanier, one of the modern pioneers of the field. Lanier had founded the company VPL Research in 1985. VPL Research developed several VR devices like the Data Glove, the Eye Phone, and the Audio Sphere. VPL licensed the Data Glove technology to Mattel, which used it to make an accessory known as the Power Glove. While the Power Glove was hard to use and not popular, at US$75 it was an early affordable VR device.

During this time, virtual reality was not well known, though it did receive media coverage in the late 1980s. Most of its popularity came from marginal cultures, like cyberpunks, who viewed the technology as a potential means for social change, and drug culture, who praised virtual reality not only as a new art form, but as an entirely new frontier. The concept of virtual reality was popularized in mass media by movies such as Brainstorm and The Lawnmower Man. The VR research boom of the 1990s was accompanied by the non-fiction book Virtual Reality by Howard Rheingold. The book served to demystify the subject, making it more accessible to researchers outside of the computer sphere and sci-fi enthusiasts.

Power of Capturing the Moment in Mobile Devices

A smartphone is a mobile phone with an advanced mobile operating system which combines features of a personal computer operating system with other features useful for mobile or handheld use. Smartphones, which are usually pocket-sized, typically combine the features of a cell phone, such as the ability to receive and make phone calls and text messages, with those of other popular digital mobile devices. Other features typically include a personal digital assistant (PDA) for making appointments in a calendar, media player, video games, GPS navigation unit, digital camera, and digital video camera. Most smartphones can access the Internet and can run a variety of third-party software components. They typically have a color graphical user interface screen that covers 70% or more of the front surface, with an LCD, OLED, AMOLED, LED, or similar screen; the screen is often a touchscreen.

In 1999, the Japanese firm NTT DoCoMo released the first smartphones to achieve mass adoption within a country. Smartphones became widespread in the 21st century and most of those produced from 2012 onwards have high-speed mobile broadband 4G LTE, motion sensors, and mobile payment features. In the third quarter of 2012, one billion smartphones were in use worldwide. Global smartphone sales surpassed the sales figures for regular cell phones in early 2013. As of 2013, 65% of U.S. mobile consumers own smartphones. By January 2016, smartphones held over 79% of the U.S. mobile market.


Devices that combined telephony and computing were first conceptualized by Nikola Tesla in 1909 and by Theodore Paraskevakos in 1971, patented in 1974, and offered for sale beginning in 1993. Paraskevakos was the first to introduce the concepts of intelligence, data processing and visual display screens into telephones. In 1971, while he was working with Boeing in Huntsville, Alabama, Paraskevakos demonstrated a transmitter and receiver that provided additional ways to communicate with remote equipment; however, it did not yet have the general-purpose PDA applications in a wireless device typical of smartphones. The devices were installed at Peoples’ Telephone Company in Leesburg, Alabama and were demonstrated to several telephone companies. The original and historic working models are still in the possession of Paraskevakos.

“It is the prerogative of wizards to be grumpy. It is not, however, the prerogative of freelance consultants who are late on their rent, so instead of saying something smart, I told the woman on the phone, “Yes, ma’am. How can I help you today?”
— Jim Butcher (Storm Front (The Dresden Files, #1))

In the late 1990s, many mobile phone users carried a separate dedicated PDA device, running early versions of operating systems such as Palm OS, BlackBerry OS or Windows CE/Pocket PC. These operating systems would later evolve into mobile operating systems. In March 1996, Hewlett-Packard released the OmniGo 700LX, which was a modified 200LX PDA that supported a Nokia 2110-compatible phone and had integrated software built in ROM to support it. The device featured a 640×200 resolution CGA compatible 4-shade gray-scale LCD screen and could be used to make and receive calls, text messages, emails and faxes. It was also 100% DOS 5.0 compatible, allowing it to run thousands of existing software titles including early versions of Windows.

In August 1996, Nokia released the Nokia 9000 Communicator which combined a PDA based on the GEOS V3.0 operating system from Geoworks with a digital cellular phone based on the Nokia 2110. The two devices were fixed together via a hinge in what became known as a clamshell design. When opened, the display was on the inside top surface and with a physical QWERTY keyboard on the bottom. The personal organizer provided e-mail, calendar, address book, calculator and notebook with text-based web browsing, and the ability to send and receive faxes. When the personal organizer was closed, it could be used as a digital cellular phone. In June 1999, Qualcomm released a “CDMA Digital PCS Smartphone” with integrated Palm PDA and Internet connectivity, known as the “pdQ Smartphone”.

In early 2000, the Ericsson R380 was released by Ericsson Mobile Communications, and was the first device marketed as a “smartphone”. It combined the functions of a mobile phone and a PDA, and supported limited web browsing with a resistive touchscreen utilizing a stylus. In early 2001, Palm, Inc. introduced the Kyocera 6035, which combined a PDA with a mobile phone and operated on Verizon. It also supported limited web browsing. In 2002, Handspring released the Treo 180, the first smartphone to combine Palm OS and a GSM phone, with telephony, SMS messaging and Internet access fully integrated into Palm OS.

Smartphones before Android, iOS and BlackBerry typically ran on Symbian, which was originally developed by Psion. It was the world’s most widely used smartphone operating system until the last quarter of 2010.

Use of Drone to Minimize the Violence in Highways

An unmanned aerial vehicle (UAV), commonly known as a drone, as an unmanned aircraft system (UAS), or by several other names, is an aircraft without a human pilot aboard. The flight of UAVs may operate with various degrees of autonomy: either under remote control by a human operator, or fully or intermittently autonomously, by onboard computers.

Compared to manned aircraft, UAVs are often preferred for missions that are too “dull, dirty or dangerous” for humans. They originated mostly in military applications, although their use is expanding in commercial, scientific, recreational, agricultural, and other applications, such as policing and surveillance, aerial photography, agriculture and drone racing. Civilian drones now vastly outnumber military drones, with estimates of over a million sold by 2015.


The term drone, more widely used by the public, was coined in reference to old military unmanned aircraft, whose dumb-looking navigation and loud, regular motor sounds resembled the male bee. The term has encountered strong opposition from aviation professionals and government regulators.

“Worker bees can leave.
Even drones can fly away.
The Queen is their slave.”
— Chuck Palahniuk

The term unmanned aircraft system was adopted by the United States Department of Defense and the United States Federal Aviation Administration in 2005 according to their Unmanned Aircraft System Roadmap 2005–2030. The International Civil Aviation Organization and the British Civil Aviation Authority adopted this term, also used in the European Union’s Single-European-Sky Air-Traffic-Management Research roadmap for 2020. This term emphasizes the importance of elements other than the aircraft, including ground control stations, data links and other support equipment. Similar terms include unmanned-aircraft vehicle system, remotely piloted aerial vehicle, and remotely piloted aircraft system; many similar terms are in use.

A UAV is defined as a “powered, aerial vehicle that does not carry a human operator, uses aerodynamic forces to provide vehicle lift, can fly autonomously or be piloted remotely, can be expendable or recoverable, and can carry a lethal or nonlethal payload”. Therefore, missiles are not considered UAVs because the vehicle itself is a weapon that is not reused, though it is also unmanned and in some cases remotely guided.

The relation of UAVs to remote controlled model aircraft is unclear, and UAVs may or may not include model aircraft. Some jurisdictions base their definition on size or weight; however, the US Federal Aviation Administration defines any unmanned flying craft as a UAV regardless of size. A radio-controlled aircraft becomes a drone with the addition of an autopilot artificial intelligence (AI), and ceases to be a drone when the AI is removed.

The earliest attempt at a powered UAV was A. M. Low’s “Aerial Target” in 1916. Nikola Tesla described a fleet of unmanned aerial combat vehicles in 1915. Advances followed during and after World War I, including the Hewitt-Sperry Automatic Airplane. The first scaled remote piloted vehicle was developed by film star and model-airplane enthusiast Reginald Denny in 1935. More emerged during World War II – used both to train antiaircraft gunners and to fly attack missions. Nazi Germany produced and used various UAV aircraft during the war. Jet engines entered service after World War II in vehicles such as the Australian GAF Jindivik, and Teledyne Ryan Firebee I of 1951, while companies like Beechcraft offered their Model 1001 for the U.S. Navy in 1955. Nevertheless, they were little more than remote-controlled airplanes until the Vietnam War.

In 1959, the U.S. Air Force, concerned about losing pilots over hostile territory, began planning for the use of unmanned aircraft. Planning intensified after the Soviet Union shot down a U-2 in 1960. Within days, a highly classified UAV program started under the code name of “Red Wagon”. The August 1964 clash in the Tonkin Gulf between naval units of the U.S. and North Vietnamese Navy initiated America’s highly classified UAVs into their first combat missions of the Vietnam War.

Diamond Colour Scale

The GIA grades diamonds on a scale of D (colorless) through Z (light color). All D-Z diamonds are considered white, even though they contain varying degrees of color. True fancy colored diamonds (such as yellows, pinks, and blues) are graded on a separate color scale.

Below is the GIA diamond color chart with definitions:
GIA DIAMOND COLOR SCALE
Colorless
D-F Diamond Color Scale
While there are differences in color between D, E, and F diamonds, they can be detected only by a gemologist in side by side comparisons, and rarely by the untrained eye.

D-F diamonds should only be set in white gold / platinum. Yellow gold reflects color, negating the diamond’s colorless effect.

Near Colorless
G-J Diamond Color Scale
While containing traces of color, G-J diamonds are suitable for a platinum or white gold setting, which would normally betray any hint of color in a diamond.

Because I-J diamonds are more common than the higher grades, they tend to be a great value. An I-J diamond may retail for half the price of a D diamond. Within the G-J range, price tends to increase 10-20% between each diamond grade.
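The compounding implied by that 10–20% per-grade step can be sketched in a few lines. The numbers below are illustrative only, assuming the 15% midpoint of the quoted range and a hypothetical base price; they are not market data:

```python
# Illustrative only: relative prices implied by a 10-20% step between color
# grades within the G-J range (15% midpoint assumed; not market prices).
j_price = 1.0   # hypothetical base price for a J-color diamond
step = 0.15     # midpoint of the 10-20% per-grade increase quoted above

prices = {"J": j_price}
for prev, grade in [("J", "I"), ("I", "H"), ("H", "G")]:
    prices[grade] = prices[prev] * (1 + step)

# Three compounding 15% steps give G roughly a 52% premium over J.
print(round(prices["G"] / prices["J"], 2))  # 1.52
```

This is why a two- or three-grade difference matters more at the top of a range than the per-step percentage alone suggests.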

Faint Color
K-M Diamond Color Scale
Beginning with K diamonds, color (usually a yellow tint) is more easily detected by the naked eye.

Set in yellow gold, these warm colored diamonds appeal to some, and are an exceptional value. Others will feel they have too much color. Due to its perceptible color tint, a K diamond is often half the price of a G diamond.

Very Light Color
N-R Diamond Color Scale
Diamonds in the N-R color range have an easily seen yellow or brown tint, but are much less expensive than higher grades.

Light Color
S-Z Diamond Color Scale
For almost all customers, S-Z diamonds have too much color for a white diamond.
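Because the letter ranges above partition D–Z cleanly into five named categories, the scale can be encoded as a small lookup. The helper below is hypothetical, written for illustration; it is not an official GIA tool:

```python
# Map a GIA color grade letter (D-Z) to its named category, per the scale
# above. Hypothetical helper for illustration; not an official GIA utility.

CATEGORIES = [
    ("DEF", "Colorless"),
    ("GHIJ", "Near Colorless"),
    ("KLM", "Faint Color"),
    ("NOPQR", "Very Light Color"),
    ("STUVWXYZ", "Light Color"),
]

def color_category(grade: str) -> str:
    """Return the GIA category name for a single D-Z color grade letter."""
    letter = grade.strip().upper()
    for letters, name in CATEGORIES:
        if letter in letters:
            return name
    raise ValueError(f"not a D-Z color grade: {grade!r}")

print(color_category("H"))  # Near Colorless
```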

Beautiful Cities Around the World

A city is a large and permanent human settlement. Although there is no agreement on how a city is distinguished from a town in general English language meanings, many cities have a particular administrative, legal, or historical status based on local law.

Cities generally have complex systems for sanitation, utilities, land usage, housing, and transportation. The concentration of development greatly facilitates interaction between people and businesses, sometimes benefiting both parties in the process, but it also presents challenges to managing urban growth.

A big city or metropolis usually has associated suburbs and exurbs. Such cities are usually associated with metropolitan areas and urban areas, creating numerous business commuters traveling to urban centers for employment. Once a city expands far enough to reach another city, this region can be deemed a conurbation or megalopolis. Damascus is arguably the oldest city in the world. In terms of population, the largest city proper is Shanghai, while the fastest-growing is Dubai.


The conventional view holds that cities first formed after the Neolithic revolution. The Neolithic revolution brought agriculture, which made denser human populations possible, thereby supporting city development. The advent of farming encouraged hunter-gatherers to abandon nomadic lifestyles and to settle near others who lived by agricultural production. The increased population density encouraged by farming and the increased output of food per unit of land created conditions that seem more suitable for city-like activities. In his book, Cities and Economic Development, Paul Bairoch takes up this position in his argument that agricultural activity appears necessary before true cities can form.

“What strange phenomena we find in a great city, all we need do is stroll about with our eyes open. Life swarms with innocent monsters.”
― Charles Baudelaire

According to Vere Gordon Childe, for a settlement to qualify as a city, it must have enough surplus of raw materials to support trade and a relatively large population. Bairoch points out that, due to sparse population densities that would have persisted in pre-Neolithic, hunter-gatherer societies, the amount of land that would be required to produce enough food for subsistence and trade for a large population would make it impossible to control the flow of trade. To illustrate this point, Bairoch offers an example:

“Western Europe during the pre-Neolithic, the density must have been less than 0.1 person per square kilometre”. Using this population density as a base for calculation, and allotting 10% of food towards surplus for trade and assuming that city dwellers do no farming, he calculates that “…to maintain a city with a population of 1,000, and without taking the cost of transport into account, an area of 100,000 square kilometres would have been required. When the cost of transport is taken into account, the figure rises to 200,000 square kilometres …”. Bairoch noted that this is roughly the size of Great Britain. The urban theorist Jane Jacobs suggests that city formation preceded the birth of agriculture, but this view is not widely accepted.
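Bairoch's figures above follow from simple arithmetic. The sketch below reproduces them under his stated assumptions (0.1 person per km², a 10% food surplus available for trade, city dwellers who do no farming, and a doubling once transport costs are counted):

```python
# Back-of-the-envelope reconstruction of Bairoch's estimate, using only the
# assumptions stated in the quoted passage.
density = 0.1            # persons per square kilometre (pre-Neolithic Western Europe)
surplus_fraction = 0.10  # share of each producer's food output available for trade
city_population = 1_000  # non-farming city dwellers to be fed

# Each rural producer spares surplus_fraction of one person's food supply,
# so feeding the city takes city_population / surplus_fraction producers.
producers_needed = city_population / surplus_fraction  # 10,000 people
area_needed = producers_needed / density               # 100,000 km^2

# Bairoch doubles the figure when transport costs are taken into account.
area_with_transport = 2 * area_needed                  # 200,000 km^2

print(area_needed, area_with_transport)
```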

In his book City Economics, Brendan O’Flaherty asserts that “Cities could persist—as they have for thousands of years—only if their advantages offset the disadvantages.” O’Flaherty illustrates two similar attracting advantages, known as increasing returns to scale and economies of scale, which are concepts usually associated with businesses, though their applications are seen in more basic economic systems as well. Increasing returns to scale occurs when “doubling all inputs more than doubles the output”; an activity has economies of scale if “doubling output less than doubles cost”. To offer an example of these concepts, O’Flaherty makes use of “one of the oldest reasons why cities were built: military protection”.
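The two definitions can be made concrete with toy functions. The functional forms below are hypothetical, chosen only to satisfy each definition; they are not taken from O’Flaherty’s book:

```python
# Toy illustrations of the two definitions (hypothetical functional forms).

def output(inputs: float) -> float:
    """Production with increasing returns to scale: output grows as inputs**1.5."""
    return inputs ** 1.5

def cost(quantity: float) -> float:
    """Cost with economies of scale: a fixed cost plus a constant marginal cost."""
    return 100 + 2 * quantity

# Increasing returns to scale: doubling all inputs more than doubles the output.
assert output(16) > 2 * output(8)

# Economies of scale: doubling output less than doubles cost,
# because the fixed cost is spread over more units.
assert cost(200) < 2 * cost(100)
```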

Scientific Study of Organisms in The Ocean

Marine biology is the scientific study of organisms in the ocean or other marine bodies of water. Given that in biology many phyla, families and genera have some species that live in the sea and others that live on land, marine biology classifies species based on the environment rather than on taxonomy. Marine biology differs from marine ecology in that marine ecology is focused on how organisms interact with each other and the environment, while marine biology is the study of the organisms themselves.

A large proportion of all life on Earth lives in the ocean. Exactly how large this proportion is remains unknown, since many ocean species are still to be discovered. The ocean is a complex three-dimensional world covering approximately 71% of the Earth’s surface. The habitats studied in marine biology include everything from the tiny layers of surface water in which organisms and abiotic items may be trapped in surface tension between the ocean and atmosphere, to the depths of the oceanic trenches, sometimes 10,000 meters or more beneath the surface of the ocean.

Specific habitats include coral reefs, kelp forests, seagrass meadows, the surrounds of seamounts and thermal vents, tidepools, muddy, sandy and rocky bottoms, and the open ocean (pelagic) zone, where solid objects are rare and the surface of the water is the only visible boundary. The organisms studied range from microscopic phytoplankton and zooplankton to huge cetaceans (whales) 30 meters (98 feet) in length.

“The water was tripping over itself, splashing and hypnotizing, and I tried to fix my mind on a chunk of it, like each little ripple was a life that began far away in a high mountain source and had traveled miles pushing forward until it arrived at this spot before my eyes, and now without hesitation that water-life was hurling itself over the cliff. I wanted my body in all that swiftness; I wanted to feel the slip and pull of the currents and be dashed and pummeled on the rocks below . . .”
— Justin Torres (We the Animals)

Marine life is a vast resource, providing food, medicine, and raw materials, in addition to helping to support recreation and tourism all over the world. At a fundamental level, marine life helps determine the very nature of our planet. Marine organisms contribute significantly to the oxygen cycle, and are involved in the regulation of the Earth’s climate. Shorelines are in part shaped and protected by marine life, and some marine organisms even help create new land.

Many species are economically important to humans, including both finfish and shellfish. It is also becoming understood that the well-being of marine organisms and other organisms are linked in fundamental ways. The human body of knowledge regarding the relationship between life in the sea and important cycles is rapidly growing, with new discoveries being made nearly every day. These cycles include those of matter (such as the carbon cycle) and of air (such as Earth’s respiration, and movement of energy through ecosystems including the ocean). Large areas beneath the ocean surface still remain effectively unexplored.

Early instances of the study of marine biology trace back to Aristotle (384–322 BC), who made several contributions which laid the foundation for many future discoveries and were the first big step in the early exploration period of the ocean and marine life. In 1768, Samuel Gottlieb Gmelin published the Historia Fucorum, the first work dedicated to marine algae and the first book on marine biology to use the then new binomial nomenclature of Linnaeus. It included elaborate illustrations of seaweed and marine algae on folded leaves. The British naturalist Edward Forbes (1815–1854) is generally regarded as the founder of the science of marine biology.[9] The pace of oceanographic and marine biology studies quickly accelerated during the course of the 19th century.

Light Ultra Powerful Laptop

A laptop, often called a notebook or “notebook computer”, is a small, portable personal computer with a “clamshell” form factor: an alphanumeric keyboard on the lower part of the clamshell and a thin LCD or LED computer screen on the upper portion, which is opened up to use the computer. Laptops are folded shut for transportation, and thus are suitable for mobile use. Although originally there was a distinction between laptops and notebooks, the former being bigger and heavier than the latter, as of 2014 there is often no longer any difference. Laptops are commonly used in a variety of settings, such as at work, in education, and for personal multimedia and home computer use.

A laptop combines the components, inputs, outputs, and capabilities of a desktop computer, including the display screen, small speakers, a keyboard, pointing devices (such as a touchpad or trackpad), a processor, and memory into a single unit. Most 2016-era laptops also have integrated webcams and built-in microphones. Some 2016-era laptops have touchscreens. Laptops can be powered either from an internal battery or by an external power supply from an AC adapter.

Hardware specifications, such as the processor speed and memory capacity, significantly vary between different types, makes, models and price points. Design elements, form factor, and construction can also vary significantly between models depending on intended use. Examples of specialized models of laptops include rugged notebooks for use in construction or military applications, as well as low production cost laptops such as those from the One Laptop per Child organization, which incorporate features like solar charging and semi-flexible components not found on most laptop computers.

“In terms of the technology I use the most, it’s probably a tie between my Blackberry and my MacBook Pro laptop. That’s how I communicate with the rest of the world and how I handle all the business I have to handle.”
— John Legend

Portable computers, which later developed into modern laptops, were originally considered to be a small niche market, mostly for specialized field applications, such as in the military, for accountants, or for traveling sales representatives. As portable computers evolved into the modern laptop, they became widely used for a variety of purposes.

As the personal computer became feasible in 1971, the idea of a portable personal computer soon followed. A “personal, portable information manipulator” was imagined by Alan Kay at Xerox PARC in 1968, and described in his 1972 paper as the “Dynabook”. The IBM Special Computer APL Machine Portable (SCAMP) was demonstrated in 1973. This prototype was based on the IBM PALM processor. The IBM 5100, the first commercially available portable computer, appeared in September 1975, and was based on the SCAMP prototype.

As 8-bit CPU machines became widely accepted, the number of portables increased rapidly. The Osborne 1, released in 1981, used the Zilog Z80 and weighed 23.6 pounds. It had no battery, a 5 in CRT screen, and dual 5.25 in single-density floppy drives. In the same year the first laptop-sized portable computer, the Epson HX-20, was announced. The Epson had an LCD screen, a rechargeable battery, and a calculator-size printer in a 1.6 kg (3.5 lb) chassis.

Both Tandy/RadioShack and HP also produced portable computers of varying designs during this period. The first laptops using the flip form factor appeared in the early 1980s. The Dulmont Magnum was released in Australia in 1981–82, but was not marketed internationally until 1984–85. The US$8,150 GRiD Compass 1101, released in 1982, was used at NASA and by the military, among others. The Gavilan SC, released in 1983, was the first computer described as a “laptop” by its manufacturer.

Best Way to have Fun During Roadtrip

The world’s first recorded long-distance road trip by automobile took place in Germany in August 1888, when Bertha Benz, the wife of Karl Benz, the inventor of the first patented motor car (the Benz Patent-Motorwagen), travelled from Mannheim to Pforzheim and back in the third experimental Benz motor car (which had a maximum speed of 10 miles per hour), with her two teenage sons Richard and Eugen, but without the consent and knowledge of her husband.

Her official reason was that she wanted to visit her mother but unofficially she intended to generate publicity for her husband’s invention (which had only been used on short test drives before), which succeeded as the automobile took off greatly afterwards and the Benz’s family business eventually evolved into the present day Mercedes-Benz company.

Presently there is a dedicated signposted scenic route in Baden-Württemberg called the Bertha Benz Memorial Route to commemorate her historic first road trip.

The first successful North American transcontinental trip by automobile took place in 1903 and was piloted by H. Nelson Jackson and Sewall K. Crocker, accompanied by a dog named Bud.[4] The trip was completed using a 1903 Winton Touring Car, dubbed “Vermont” by Jackson. The trip took a total of 63 days between San Francisco and New York, costing US$8,000. The total cost included items such as food, gasoline, lodging, tires, parts, other supplies, and the cost of the Winton.

The first woman to cross the American landscape by car was Alice Ramsey, who made the trip with three female passengers in 1909. Ramsey left from Hell’s Gate in Manhattan, New York and traveled 59 days to San Francisco, California.

New highways in the early 1900s helped propel automobile travel in the United States, particularly cross-country travel. Commissioned in 1926, and completely paved near the end of the 1930s, U.S. Route 66 is a living icon of early modern road tripping.

Motorists ventured cross-country for holiday as well as migrating to California and other locations. The modern American road trip began to take shape in the late 1930s and into the 1940s, ushering in an era of a nation on the move.
As a result of this new vacation-by-road style, many businesses began to cater to road-weary travelers. More reliable vehicles and services made long-distance road trips easier for families, as the length of time required to cross the continent was reduced from months to days: within one week, the average family could travel to destinations across North America.

The greatest change to the American road trip was the start, and subsequent expansion, of the Interstate Highway System. The higher speeds and controlled access nature of the Interstate allowed for greater distances to be traveled in less time and with improved safety as highways became divided.

Health Benefits of Yoga

The origins of yoga have been speculated to date back to pre-Vedic Indian traditions, and yoga is mentioned in the Rigveda, but the practice most likely developed around the sixth and fifth centuries BCE, in ancient India’s ascetic and śramaṇa movements. The chronology of the earliest texts describing yoga practices is unclear; they are variously credited to the Hindu Upanishads and the Buddhist Pāli Canon,[10] probably of the third century BCE or later. The Yoga Sutras of Patanjali date from the first half of the 1st millennium CE, but only gained prominence in the West in the 20th century. Hatha yoga texts emerged around the 11th century, with origins in tantra.

Yoga gurus from India later introduced yoga to the West, following the success of Swami Vivekananda in the late 19th and early 20th centuries. In the 1980s, yoga became popular as a system of physical exercise across the Western world. In Indian traditions, however, yoga is more than physical exercise; it has a meditative and spiritual core. One of the six major orthodox schools of Hinduism is also called Yoga; it has its own epistemology and metaphysics, and is closely related to Hindu Samkhya philosophy.

Many studies have tried to determine the effectiveness of yoga as a complementary intervention for cancer, schizophrenia, asthma, and heart disease. The results of these studies have been mixed and inconclusive: cancer studies suggest effectiveness ranging from none to unclear, while others suggest yoga may reduce risk factors and aid in a patient’s psychological healing process.

I am doing everything to be fit – like not eating oily food, doing yoga, gymming and consulting my doc.

Suresh Raina

In Vedic Sanskrit, yoga (from the root yuj) means “to add”, “to join”, “to unite”, or “to attach” in its most common literal sense. By figurative extension from the yoking or harnessing of oxen or horses, the word took on broader meanings such as “employment, use, application, performance” (compare the figurative uses of “to harness” as in “to put something to some use”). All further developments of the sense of this word are post-Vedic. More prosaic senses of the word are also found in Indian epic poetry.

There are very many compound words containing yoga in Sanskrit; in such compounds the word often denotes a connection or union, or, in simpler words, “combined”. For example, guṇáyoga means “contact with a cord”; chakráyoga is used in a medical sense; chandráyoga has the astronomical sense of “conjunction of the moon with a constellation”; puṃyoga is a grammatical term expressing “connection or relation with a man”, etc. Thus, bhaktiyoga means “devoted attachment” in the monotheistic Bhakti movement.

The term kriyāyoga has a grammatical sense, meaning “connection with a verb”. But the same compound is also given a technical meaning in the Yoga Sutras, designating the “practical” aspects of the philosophy, i.e. “union with the supreme” through the performance of duties in everyday life.
