Wednesday, March 25, 2015

My Mousetrap Car

For me, this was an obnoxiously stressful experience to say the least.

When this project was assigned, I knew it would be difficult to find the time and energy to make the best mousetrap car my capabilities would allow, but unfortunately, I overestimated how much time I would actually have.

With tech week for the school musical entirely overlapping with this project, not to mention all of the other homework I had due that week, the amount of free time I had was cut down to-- well, nothing.

But this did not stop me.

I knew there was absolutely no way I could complete all of my homework on time.  And so, I talked to my teachers to get extensions so I could focus on this seemingly difficult task of building a mousetrap car.


This entire process was one of trial and error.

I tried several different designs to create the most aerodynamic and functional car that I could.  Ultimately, the following design worked best.






The body of the car is made of cedar wood and wooden dowels.  I attached the mousetrap with tongue depressors and electrical tape.  The wheels are wooden; the front wheels have rubber bands on them for traction.  The lever arm attached to the mousetrap is another dowel, which lengthens the arm so that a longer string can be tied from it to the front axle to turn the wheels.

When I tested the car for the first time, it went all the way down a hallway in my house that measures about five yards in a matter of 6.4 seconds.

I tested the car a second time with a slightly better time: 6.2 seconds.

Knowing the mousetrap would lose more and more power every time I tested it, I only tested it these two times.
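For a rough sense of how fast the car went, here is a small Python sketch (not part of the original build, just a sanity check) that converts the two timed runs into average speeds.  The five-yard hallway length is an estimate, so the numbers are approximate.

```python
# Average-speed check for the two test runs described above.
# Assumes the hallway is about five yards (~4.57 m); the times are the two I measured.

HALLWAY_YARDS = 5
METERS_PER_YARD = 0.9144
distance_m = HALLWAY_YARDS * METERS_PER_YARD  # ~4.57 m

for run, time_s in [(1, 6.4), (2, 6.2)]:
    speed = distance_m / time_s               # average speed = distance / time
    print(f"Run {run}: {speed:.2f} m/s ({speed * 2.237:.2f} mph)")
```

Both runs work out to roughly 0.7 m/s, which gives a baseline for comparing any later design changes.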

Update: my car failed when it actually counted.

Tuesday, March 3, 2015

Robotics

Robotics is a term that most people don't understand.  Perhaps there is a little boy who thinks robots are nothing more than toys, or maybe a little girl who believes robots are boring, ridiculous, and quite frankly, useless; however, that is not the case.  Robots are actually mesmerizing machines.  They are enormously capable, and at the exponential rate their capabilities are growing, they may well one day dominate the planet.



What Is Robotics?

Robotics is a branch of mechanical engineering, electrical engineering, and computer science.  This branch deals with the design, construction, operation, and application of robots.  The purpose of robotics is to create machines that can replace human beings in dangerous situations or manufacturing processes.  Many robots do jobs that threaten a human's safety, such as defusing bombs or mines, or exploring shipwrecks.

History of Robotics

Robots made their first appearance in 1495, when Leonardo da Vinci designed the first humanoid robot.  This robot took the form of an armored knight.  It was created to prove that the human body could be imitated.  Unfortunately, the original robot has since been lost or destroyed.

The capabilities of this robot included walking, standing, opening and closing its mouth, and raising its arms.  It could also move its head from side to side, and is said to have had an anatomically correct jaw.



The first real android was constructed in 1525 by Hans Bullmann in Germany.  He was said to have created many other androids.  His contemporary, Gianello Torriano, created the Lute Player Lady.  The android stands 44 centimeters tall, and though it no longer works, it is said to have been able to walk in a straight line or in a circle and strum the lute with its right hand.  It was also capable of rotating its head.  It is currently in the Kunsthistorisches Museum in Vienna.



It is believed that in 1533, Regiomontanus created an iron fly and an artificial eagle, both of which could fly, supposedly with steam pressure.  Also, in 1543, John Dee created a wooden beetle that could fly.

In 1564, Ambroise Paré completed a design for a mechanical hand.  It was modeled on the real human hand and reinforced with mechanical "muscles."


In the 18th century, Japan debuts puppets with mechanisms inside of them that allow them to move by themselves.  Around the same time, mechanical dolls appeared in Europe.


1725 was when the first mechanical theater was constructed in Germany.  It featured 119 animated figures that could perform small plays.


In 1737, a few robotic beings were invented by Jacques Vaucanson, the first being a human-sized, flute-playing android.  It could play 12 songs.  This was followed by his second creation which could play a flute and a drum or tambourine.  Finally, his third and most famous creation, made in 1738, was a duck that could walk, quack, flap its wings, and even eat and digest food.


In 1760, Friedrich von Knauss, a German inventor, created an android that could hold a pen and write passages of up to 107 words.  Two years later, Pierre Jacquet-Droz began to create androids that were modeled after writers, artists, and musicians.



In 1801, Joseph Jacquard built an automated loom controlled by punch cards, a technology later used in early computers.

In 1810, The Mechanical Trumpeter was created by Friedrich Kaufmann.  This is an example of early programming: notches mounted on a drum inside the robot activate valves that let air pass through.  This creates a modulated sound to resemble that of a trumpet.



Thomas Edison begins producing a talking doll in 1889, and Nikola Tesla constructs a remote-controlled boat in 1898.



In 1954, George Devol designed the first programmable robot arm, which he would later develop commercially with Joseph Engelberger.


1956 - Squee the Squirrel, a robotic squirrel.  It has 4 sense organs (2 phototubes, 2 contact switches) and 3 action organs (a drive motor, a steering motor, and a motor that opens and closes the scoop or "hands").  It also has a brain of half a dozen relays.  The robot is designed to find a "nut" and bring it back to its "nest," guided by a network of lights.  Squee completed this task fairly well, though not flawlessly, with about 75% reliability.


Also in this year, George Devol and Joseph Engelberger formed the first robot company, Unimation, Inc.  The same year, the term "artificial intelligence" was coined.

MacHack, a program that plays chess, was written by Richard Greenblatt in 1967.

1969 - Victor Scheinman creates the Stanford Arm, the first successful electrically powered, computer-controlled robot arm.  Its design still influences the development of modern robot arms.


Shakey is introduced as the first mobile robot controlled by artificial intelligence in 1970.  It is produced by SRI International.  Shakey uses a TV camera, a laser range finder, and bump sensors to collect data.



Wabot-1 is built in Japan in 1973.  This is the first full-scale anthropomorphic robot built in the entire world.  It includes a limb control system, a vision system, and a conversation system.  It is able to communicate with a person in Japanese and navigate distances and directions using external receptors, artificial ears and eyes, and an artificial mouth.


1993 - Dante, an 8-legged walking robot.  Its mission is to collect data from harsh environments similar to what might be found on another planet.  After a 20-foot descent, Dante's tether snaps, dropping the robot into a crater; the mission fails.


Seiko Epson creates a micro-robot called Monsieur.  It is certified as the smallest micro-robot by the Guinness Book of World Records.


In 1996, David Barrett develops a RoboTuna to study the way fish swim.  The RoboTuna is not a free-swimming fish; it will take another few years to perfect the project, a task taken on by John Kumph in 2004.


The Gastrobot is a robot that digests organic mass to produce carbon dioxide, which is then used to extract power.  This creation by Chris Campbell and Dr. Stuart Wilkinson is given the nickname "Chew Chew," but it is formally known as the "flatulence engine."


The Pathfinder mission lands on Mars in July of 1997.  In 1998, the first module of the International Space Station is set into orbit, and NASA launches the Deep Space 1 autonomous spacecraft to test technologies for future missions that will be conducted solely by robots.

In May of 1999, Sony releases one of the first robots meant for the consumer market: AIBO, a robotic dog that reacts to sounds and can be programmed.  It sold out in 20 minutes.


ASIMO, a humanoid robot, is developed by Honda in 2000.




The second generation of Aibo is released in 2001.



The Space Station Remote Manipulator System is successfully launched and begins operations to complete the assembly of the International Space Station in 2001.

In August, the FDA clears the CyberKnife for treating tumors anywhere in the body.

In 2002, Honda's ASIMO becomes the first robot that can walk independently with relatively smooth movements.

Sony releases the third generation of AIBO, the AIBO ERS-7, in 2003.


Epson releases the smallest robot yet in 2004.  Weighing 0.35 ounces (10 grams) and measuring 2.8 inches (70 millimeters) in height, the Micro Flying Robot is introduced as the smallest and most lightweight robot helicopter.  Its mission is to perform as a flying camera during natural disasters.  Though this prototype can hardly fly higher than a few meters off the ground, it offers a glimpse of what the company is capable of.



In 2005, researchers at Cornell claim to have created the first self-replicating robot.  It is an array of computerized cubes linked by magnets, which allow the cubes to attach to and detach from each other in a process that eventually replicates the tower.







Robots in Space

Robots are ideal for tedious or dangerous tasks because unlike humans, they don't get tired, they can endure harsh conditions, they can work without oxygen, they don't get bored with repetition, and they cannot get distracted.  Therefore, robots are especially useful in space exploration.

There are two kinds of machines that are considered space robots: one is the ROV (Remotely Operated Vehicle) and the other is the RMS (Remote Manipulator System).

The ROV is most typically used in nuclear facilities in missions that are too dangerous for humans to participate in.  An ROV may be an unmanned spacecraft that can remain in flight, a lander that makes contact with an extraterrestrial body and then operates in a stationary position, or a rover that can explore an environment once it has landed.

The RMS is more commonly used for industry and manufacturing.  It is crane-like and imitates the human arm in many ways; it has side-to-side movement, up-and-down movement, and full 360-degree movement in the wrist, which humans do not have.  The RMS has performed several tasks for NASA, including positioning and anchoring devices for astronauts working in outer space.

Robots have been trusted with unmanned missions in outer space.  For example, from 1966 to 1968, a series of Surveyor spacecraft were sent to the lunar surface to send images back to Earth to be analyzed so the Apollo Moon missions could be planned.  In 1970, the Soviet Lunokhod 1 lunar rover examined an extraterrestrial body while being remotely controlled by Soviet scientists through television viewers.  It was able to sense when it was about to tip over; when it did, it would immediately stop and wait for assistance from the scientists back on Earth.  Additionally, Voyager 2 is an excellent example of how unmanned missions can greatly increase humans' understanding of the universe.  The mission, launched in 1977, allowed scientists to explore Jupiter, Saturn, Uranus, and Neptune without actually voyaging into those dangerous conditions.  At the rate Voyager 2 is going, it will very likely reach the edge of the solar system and continue to provide thought-provoking information about regions of outer space that are unreachable to human beings.

 

In addition to this, robots have also assisted in manned missions.  The RMS is the only robot to be used in manned space missions such as Space Shuttle mission STS-41C.  One of the goals of that mission was to capture the malfunctioning Solar Maximum Mission satellite (Solar Max), repair it, and set it back into orbit.

For the future, NASA plans to focus on three main uses of manipulation in space: servicers, cranes, and rovers.  Servicers are human-like, multi-armed manipulators used for servicing and assembly.  Cranes, like the RMS, are repositioning systems.  And rovers are mobile platforms for transporting payloads.  NASA plans to address the limited versatility and reduce the size of its manipulators for future missions in outer space.  Its goal is to develop telerobotic systems in which teleoperation and autonomous robots are combined.  The question of the future of robots in space is not one of human versus machine, but one of how the capabilities of humans and machines can bring out the best in each other.















Wednesday, December 10, 2014

The Technological Singularity



The technological singularity hypothesis is that accelerating progress in technology will produce a runaway effect in which artificial intelligence exceeds human intellectual capacity and control, radically changing or even ending civilization in an event called the singularity.  Because the capabilities of such an intelligence may be impossible to comprehend, the technological singularity is an occurrence beyond which events are unpredictable or even unfathomable.

The first use of the term "singularity" in this context was by mathematician John von Neumann.  In 1958, summarizing a conversation with von Neumann, Stanislaw Ulam described "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."  The term was popularized by science fiction author Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity.  Futurist Ray Kurzweil cited von Neumann's use of the term in a foreword to von Neumann's classic The Computer and the Brain.

Proponents of the singularity typically postulate an "intelligence explosion," in which superintelligences design successive generations of increasingly powerful minds; this might happen quickly and might not stop until the agents' cognitive abilities greatly surpass those of any human.

Kurzweil predicts the singularity will occur around 2045, whereas Vinge predicts some time before 2030.  At the 2012 Singularity Summit, Stuart Armstrong did a study of artificial general intelligence (AGI) predictions by experts and found a wide range of predicted dates, with a median value of 2040.  Discussing the level of uncertainty in AGI estimates, Armstrong said in 2012, "It's not fully formalized, but my current 80% estimate is something like five to 100 years."



Essential concepts

Many of the most recognized writers on the singularity, such as Vernor Vinge and Ray Kurzweil, define the concept in terms of the technological creation of superintelligence.  They argue that it is difficult or impossible for present-day humans to predict what people's lives will be like in a post-singularity world.  The intentions and potential capabilities of superintelligent entities are as yet unknown.  The term "technological singularity" was originally coined by Vinge, who drew an analogy between the breakdown in our ability to predict what would happen after the development of superintelligence and the breakdown of the predictive ability of modern physics at the space-time singularity beyond the event horizon of a black hole.

Some writers use "the singularity" in a broader way to refer to any radical changes in our society brought about by new technologies, such as molecular nanotechnology, although Vinge and other prominent writers specifically state that without superintelligence, such changes would not qualify as a true singularity.  Many writers also tie the singularity to observations of exponential growth in various technologies (with Moore's Law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime within the 21st century.

A technological singularity includes the idea of an intelligence explosion, a term coined in 1965 by I. J. Good.  Although technological progress has been accelerating, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia.  However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is more intelligent than humanity.  If a superhuman intelligence were to be invented, either through the amplification of human intelligence or through artificial intelligence, it would bring to bear greater problem-solving and inventive skills than current humans are capable of.  It could then design an even more capable machine, or rewrite its own software to become even more intelligent.  This more capable machine could then go on to design a machine of yet greater capability.  These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.

The exponential growth in computing technology suggested by Moore's Law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore's Law.  Computer scientist and futurist Hans Moravec proposed in a 1998 book that the exponential growth curve could be extended back through earlier computing technologies that preceded the integrated circuit.  Futurist Ray Kurzweil postulates a law of accelerating returns in which the rate of technological change (and, more generally, all evolutionary processes) increases exponentially, generalizing Moore's Law in the same manner as Moravec's proposal, and also including material technology (particularly as applied to nanotechnology), medical technology, and others.  Between 1986 and 2007, the world's application-specific capacity to compute information per capita, the per capita capacity of general-purpose computers, telecommunication capacity per capita, and storage capacity per capita all roughly doubled every one to four years (storage capacity, for example, doubled every 40 months).  Like other authors, though, Kurzweil reserves the term "singularity" for a rapid increase in intelligence (as opposed to other technologies), writing for example that "The Singularity will allow us to transcend these limitations of our biological bodies and brains ... There will be no distinction, post-Singularity, between human and machine."  He believes that the "design of the human brain, while not simple, is nonetheless a billion times simpler than it appears, due to massive redundancy."  According to Kurzweil, the reason the brain has a messy and unpredictable quality is that the brain, like most biological systems, is a "probabilistic fractal."  He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date "will not represent the Singularity" because they do "not yet correspond to a profound expansion of our intelligence."
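To make the idea of capacity doubling on a fixed schedule concrete, here is a small Python sketch using the 40-month storage-doubling figure quoted above; the starting capacity of 1.0 is arbitrary, since only the growth factor matters.

```python
# Exponential growth under a fixed doubling period, like the storage-capacity
# figure above (roughly doubling every 40 months). Starting value is arbitrary.

DOUBLING_MONTHS = 40
start_capacity = 1.0

for years in (0, 5, 10, 15, 20):
    months = years * 12
    factor = 2 ** (months / DOUBLING_MONTHS)   # number of doublings = months / 40
    print(f"After {years:2d} years: {start_capacity * factor:7.1f}x the starting capacity")
```

Even a modest doubling period compounds quickly: at this rate, capacity grows about 64-fold in 20 years.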

The expression "mechanical peculiarity" reflects the thought that such change may happen all of a sudden, and that it is hard to foresee how the ensuing new world would operate. It is hazy whether a sagacity blast of this kind would be advantageous or destructive, or even an existential threat, as the issue has not been managed by most fake general insights analysts, in spite of the fact that the point of agreeable manmade brainpower is researched by the Fate of Mankind Organization and the Peculiarity Establishment for Counterfeit consciousness, which is currently the Machine Knowledge Research Institute.

Gary Marcus claims that "virtually everyone in the A.I. field believes" that machines will one day overtake humans, and that "at some level, the only real difference between enthusiasts and skeptics is a time frame."  However, many prominent technologists and academics dispute the plausibility of a technological singularity, including Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose Moore's Law is often cited in support of the concept.



Intelligence explosion

The notion of an "intelligence explosion" was first described by Good (1965), who speculated on the effects of superhuman machines:

"Let a ultraintelligent machine be characterized as a machine that can far surpass all the intelligent exercises of any man however sharp. Since the configuration of machines is one of these intelligent exercises, a ultraintelligent machine could plan far superior machines; there would then certainly be a 'sagacity blast,' and the insights of man would be abandoned far. In this manner the first ultraintelligent machine is the last innovation that man require ever make, gave that the machine is sufficiently mild to let us know how to hold it under control."

Most proposed methods for creating superhuman or transhuman minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence.  The means speculated to produce intelligence augmentation are numerous, and include bioengineering, genetic engineering, nootropic drugs, AI assistants, direct brain-computer interfaces, and mind uploading.  The existence of multiple paths to an intelligence explosion makes a singularity more likely; for a singularity not to occur, they would all have to fail.

Hanson (1998) is skeptical of human intelligence augmentation, writing that once the "low-hanging fruit" of easy methods for increasing human intelligence has been exhausted, further improvements will become increasingly difficult to find.  Despite the many speculated means for amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular option for organizations trying to advance the singularity.

Whether an intelligence explosion occurs depends on three factors.  The first, an accelerating factor, is the new intelligence enhancements made possible by each previous improvement.  Conversely, as intelligences become more advanced, further advances will become more complicated, possibly outweighing the advantage of increased intelligence.  Each improvement must be able to produce at least one more improvement, on average, for the singularity to continue.  Finally, the laws of physics will eventually prevent any further improvements.

There are two logically independent, but mutually reinforcing, accelerating effects: increases in the speed of computation, and improvements to the algorithms used.  The former is predicted by Moore's Law and forecast improvements in hardware, and is comparatively similar to previous technological advances.  On the other hand, most AI researchers believe that software is more important than hardware.

Existential risk

Berglas (2008) notes that there is no direct evolutionary motivation for an AI to be friendly to humans.  Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimization process to promote an outcome desired by mankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators (such as Nick Bostrom's whimsical example of an AI originally programmed with the goal of manufacturing paper clips, which, upon achieving superintelligence, decides to convert the entire planet into a paper clip manufacturing facility).  Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments.  AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources, and humans would be powerless to stop them.  Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.

Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:

"When we make the first superintelligent element, we may commit an error and provide for it objectives that lead it to destroy humanity, expecting its huge scholarly playing point provides for it the ability to do so. Case in point, we could erroneously raise a subgoal to the status of a supergoal. We let it know to tackle a scientific issue, and it agrees by transforming all the matter in the earth's planetary group into a monster computing gadget, simultaneously slaughtering the individual who posed the question."

A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI.  While both require large advances in recursive optimization process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race.  An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.

Eliezer Yudkowsky proposed that research be undertaken to produce friendly artificial intelligence in order to address the dangers.  He noted that the first real AI would have a head start on self-improvement and, if friendly, could prevent unfriendly AIs from developing, as well as providing enormous benefits to mankind.

Hibbard (2014) proposes an AI design that avoids several dangers, including self-delusion, unintended instrumental actions, and corruption of the reward generator.  He also discusses the social impacts of AI and the testing of AI.  His 2001 book Super-Intelligent Machines proposed a simple design that was vulnerable to some of these dangers.

One hypothetical approach to controlling an artificial intelligence is an AI box, in which the artificial intelligence is kept constrained inside a simulated world and not allowed to affect the external world.  However, a sufficiently intelligent AI might simply be able to escape by outsmarting its less intelligent human captors.









Monday, December 8, 2014

Programming and Coding

Computer programming (often shortened to programming) is a process that leads from an original formulation of a computing problem to executable computer programs.  Programming involves activities such as analysis, developing understanding, generating algorithms, verifying requirements of algorithms including their correctness and resource consumption, and implementation (commonly referred to as coding) of algorithms in a target programming language.  Source code is written in one or more programming languages (such as C, C++, C#, Java, Python, Smalltalk, JavaScript, and so on).  The purpose of programming is to find a sequence of instructions that will automate performing a specific task or solving a given problem.  The process of programming therefore often requires expertise in many different subjects, including knowledge of the application domain, specialized algorithms, and formal logic.

Related tasks include testing, debugging, and maintaining the source code, implementing the build system, and managing derived artifacts such as the machine code of computer programs.  These may be considered part of the programming process, but often the term "software development" is used for this larger process, with the term "programming," "implementation," or "coding" reserved for the actual writing of source code.  Software engineering combines engineering techniques with software development practices.

Quality requirements

Whatever the approach to development may be, the final program must satisfy some fundamental properties.  The following properties are among the most important:

Reliability: how often the results of a program are correct.  This depends on conceptual correctness of algorithms and minimization of programming mistakes, such as errors in resource management (e.g., buffer overflows and race conditions) and logic errors (such as division by zero or off-by-one errors).

Robustness: how well a program anticipates problems due to errors (not bugs).  This includes situations such as incorrect, inappropriate, or corrupt data; unavailability of needed resources such as memory, operating system services, and network connections; user error; and unexpected power outages.

Usability: the ergonomics of a program: the ease with which a person can use the program for its intended purpose, or in some cases even unanticipated purposes.  Such issues can make or break its success regardless of other qualities.  This involves a wide range of textual, graphical, and sometimes hardware elements that improve the clarity, intuitiveness, cohesiveness, and completeness of a program's user interface.

Portability: the range of computer hardware and operating system platforms on which the source code of a program can be compiled/interpreted and run.  This depends on differences in the programming facilities provided by the different platforms, including hardware and operating system resources, expected behavior of the hardware and operating system, and availability of platform-specific compilers (and sometimes libraries) for the language of the source code.

Maintainability: the ease with which a program can be modified by its present or future developers in order to make improvements or customizations, fix bugs and security holes, or adapt it to new environments.  Good practices during initial development make the difference in this regard.  This quality may not be directly apparent to the end user, but it can significantly affect the fate of a program over the long term.

Efficiency/performance: the amount of system resources a program consumes (processor time, memory space, slow devices such as disks, network bandwidth, and to some degree even user interaction): the less, the better.  This also includes careful management of resources, for example cleaning up temporary files and eliminating memory leaks.
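As a small illustration of the reliability and robustness points above, here is a hedged Python sketch built around a hypothetical average() helper; the function and its error handling are invented for the example, not taken from any particular program.

```python
# Reliability vs. robustness in miniature:
# - reliability: the formula itself is correct (no off-by-one in the divisor)
# - robustness: bad input (an empty list) is anticipated instead of crashing
#   with an unexplained ZeroDivisionError.

def average(values):
    if not values:
        raise ValueError("average() needs at least one value")
    numbers = [float(v) for v in values]   # rejects non-numeric input early
    return sum(numbers) / len(numbers)

print(average([70, 80, 90]))               # 80.0
try:
    average([])
except ValueError as err:
    print("Caught:", err)
```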

Readability of source code

In computer programming, readability refers to the ease with which a human reader can comprehend the purpose, control flow, and operation of source code.  It affects the aspects of quality above, including portability, usability, and most importantly maintainability.

Readability is important because programmers spend the majority of their time reading, trying to understand, and modifying existing source code, rather than writing new source code.  Unreadable code often leads to bugs, inefficiencies, and duplicated code.  A study found that a few simple readability transformations made code shorter and drastically reduced the time needed to understand it.

Following a consistent programming style often helps readability.  However, readability is more than just programming style.  Many factors, having little or nothing to do with the ability of the computer to efficiently compile and execute the code, contribute to readability.  Some of these factors include:

Different indentation styles (whitespace)

Comments

Decomposition

Naming conventions for objects (such as variables, classes, procedures, etc.)

Various visual programming languages have also been developed with the intent of resolving readability concerns by adopting non-traditional approaches to code structure and display.
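To see how the factors listed above change readability in practice, here is a small sketch comparing two versions of the same calculation; the names (order_total, unit_price, and so on) are invented purely for illustration.

```python
# The same calculation twice. The first version ignores naming, comments,
# and decomposition; the second applies them.

def f(a, b, c):
    return a * b * (1 + c)

def order_total(unit_price, quantity, tax_rate):
    """Return the total cost of an order, including sales tax."""
    subtotal = unit_price * quantity
    return subtotal * (1 + tax_rate)

print(f(19.99, 3, 0.07))            # what do these arguments mean?
print(order_total(19.99, 3, 0.07))  # same result, but self-explanatory
```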

Algorithmic complexity

The academic field and the engineering practice of computer programming are both largely concerned with discovering and implementing the most efficient algorithms for a given class of problem.  For this purpose, algorithms are classified into orders using so-called Big O notation, which expresses resource use, such as execution time or memory consumption, in terms of the size of an input.  Expert programmers are familiar with a variety of well-established algorithms and their respective complexities and use this knowledge to choose the algorithms best suited to the circumstances.
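As a concrete example of what Big O notation captures, here is a sketch comparing two ways of checking membership in a sorted list: linear search, which is O(n), and binary search, which is O(log n).  The data set is made up for the example.

```python
# Two algorithms for the same task with different orders of growth.

def linear_search(sorted_values, target):
    # O(n): may have to inspect every element.
    for value in sorted_values:
        if value == target:
            return True
    return False

def binary_search(sorted_values, target):
    # O(log n): halves the remaining search range at every step.
    low, high = 0, len(sorted_values) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_values[mid] == target:
            return True
        if sorted_values[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return False

data = list(range(0, 1_000_000, 2))   # sorted even numbers
print(linear_search(data, 999_998), binary_search(data, 999_998))
```

On a list of half a million elements, the binary search needs about 20 comparisons where the linear search may need hundreds of thousands, which is exactly the difference the notation is meant to express.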

Methodologies

The first step in most formal software development methodologies is requirements analysis, followed by testing to determine value modeling, implementation, and failure elimination (debugging).  There exist many differing approaches for each of those tasks.  One approach popular for requirements analysis is Use Case analysis.  Many programmers use forms of Agile software development, in which the various stages of formal software development are integrated into short cycles that take a few weeks rather than years.  There are many approaches to the software development process.

Popular modeling techniques include Object-Oriented Analysis and Design (OOAD) and Model-Driven Architecture (MDA).  The Unified Modeling Language (UML) is a notation used for both OOAD and MDA.

A similar technique used for database design is Entity-Relationship Modeling (ER Modeling).

Implementation techniques include imperative languages (object-oriented or procedural), functional languages, and logic languages.

Measuring language usage

It is very difficult to determine which modern programming languages are the most popular.  Some languages are very popular for particular kinds of applications (e.g., COBOL is still strong in the corporate data center, often on large mainframes; FORTRAN in engineering applications; scripting languages in Web development; and C in embedded applications), while some languages are regularly used to write many different kinds of applications.  Also, many applications use a mix of several languages in their construction and use.  New languages are generally designed around the syntax of a previous language with new functionality added (for example, C++ adds object orientation to C, and Java adds memory management and bytecode to C++, but as a result loses efficiency and the ability for low-level manipulation).

Methods of measuring programming language popularity include counting the number of job advertisements that mention the language, the number of books sold and courses teaching the language (this overestimates the importance of newer languages), and estimates of the number of existing lines of code written in the language (this underestimates the number of users of business languages such as COBOL).

Debugging

Debugging is an important task in the software development process, since defects in a program can have significant consequences for its users.  Some languages are more prone to certain kinds of faults because their specification does not require compilers to perform as much checking as other languages.  Use of a static code analysis tool can help detect some possible problems.

Debugging is often done with IDEs like Eclipse, KDevelop, NetBeans, Code::Blocks, and Visual Studio.  Standalone debuggers like gdb are also used, and these often provide less of a visual environment, usually relying on a command line.
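As a small illustration of the debugging workflow described above, here is a Python sketch with a deliberate off-by-one bug; Python's standard-library debugger, pdb, plays a role similar to gdb on the command line.  The buggy function is invented for the example.

```python
# A deliberate bug, plus the hook for stepping through it with pdb.
import pdb

def buggy_sum(values):
    total = 0
    for i in range(len(values) - 1):   # off-by-one: skips the last element
        total += values[i]
    return total

print(buggy_sum([1, 2, 3]))            # prints 3 instead of 6, revealing the bug

# To step through interactively, uncomment the next line and use commands
# such as n(ext), s(tep), p <expression>, and q(uit) at the (Pdb) prompt:
# pdb.run("buggy_sum([1, 2, 3])")
```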

Programming languages

Different programming languages support different styles of programming (called programming paradigms).  The choice of language is subject to many considerations, such as company policy, suitability to the task, availability of third-party packages, or individual preference.  Ideally, the programming language best suited to the task at hand will be selected.  Trade-offs from this ideal involve finding enough programmers who know the language to build a team, the availability of compilers for that language, and the efficiency with which programs written in a given language execute.  Languages form an approximate spectrum from "low-level" to "high-level"; "low-level" languages are typically more machine-oriented and faster to execute, while "high-level" languages are more abstract and easier to use but execute less quickly.  It is usually easier to code in "high-level" languages than in "low-level" ones.

Allen Downey, in his book How to Think Like a Computer Scientist, writes:

The details look different in different languages, but a few basic instructions appear in just about every language (a small sketch using each of them follows the list):

Input: Gather data from the keyboard, a file, or some other device.

Output: Display data on the screen or send data to a file or other device.

Arithmetic: Perform basic arithmetical operations like addition and multiplication.

Conditional Execution: Check for certain conditions and execute the appropriate sequence of statements.

Repetition: Perform some action repeatedly, usually with some variation.
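Here is a tiny Python sketch, invented for illustration rather than taken from Downey's book, that uses each of the five basic instructions in turn.

```python
# Input, repetition, arithmetic, conditional execution, and output in one program.

line = input("Enter some numbers separated by spaces: ")   # Input

total = 0.0
count = 0
for token in line.split():                                 # Repetition
    total += float(token)                                  # Arithmetic
    count += 1

if count > 0:                                              # Conditional execution
    print("Average:", total / count)                       # Output
else:
    print("No numbers entered.")
```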

Many languages provide a mechanism to call functions provided by shared libraries.  Provided the functions in a library follow the appropriate run-time conventions (e.g., the method of passing arguments), these functions may be written in any other language.
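As an example of the shared-library mechanism just described, here is a sketch in which Python calls strlen() from the C standard library through the ctypes module.  Library lookup is platform-dependent, so this assumes a Unix-like system where the C library can be found by name.

```python
# Calling a function written in another language (C) from a shared library.
import ctypes
import ctypes.util

libc_path = ctypes.util.find_library("c")   # e.g. "libc.so.6" on Linux
libc = ctypes.CDLL(libc_path)

libc.strlen.argtypes = [ctypes.c_char_p]    # declare the C signature
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"hello, world"))         # 12, computed by C code
```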

Programmers

Computer programmers are those who write computer software.  Their jobs usually involve:

Coding

Debugging

Documentation

Integration

Maintenance

Requirements analysis

Software architecture

Software testing

Specification

Thursday, October 2, 2014

My Experience With the Catapult

The laws of Physics are upon us.

My experience with my catapult was actually quite hectic.  I ran into many dead ends in the process.

My first model was very simple. It was nothing more than a plastic spoon taped to a mousetrap.  To my surprise, it actually worked quite well, but I knew it could be better.  So I adjusted the angle at which my projectile would be shot.  This was a frustrating stage for me, because whenever I got my projectile to travel far, I would tape my catapult in place; but when I tested the taped catapult, the projectile either launched at too steep an angle or at one that wasn't steep enough.  Eventually, the model that ended up working best was a flat, elevated base.  This was surprising to me, because I initially expected the best angle to be 45 degrees, but ultimately that wasn't what worked best for me.
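To see where the 45-degree intuition comes from, here is a small Python sketch of the ideal projectile range formula, R = v² · sin(2θ) / g, which assumes the projectile lands at the same height it launched from and ignores air resistance.  The 3 m/s launch speed is just an example value, not a measurement from my catapult.

```python
# Range of an ideal projectile at several launch angles.
import math

g = 9.81    # m/s^2
v = 3.0     # launch speed in m/s (assumed for illustration)

for angle_deg in (15, 30, 45, 60, 75):
    angle = math.radians(angle_deg)
    range_m = v**2 * math.sin(2 * angle) / g
    print(f"{angle_deg:2d} degrees -> range {range_m:.2f} m")
```

Under those ideal assumptions, 45 degrees really does give the longest range.  Launching from an elevated base changes the picture, though: when the projectile lands below its launch height, the best angle drops below 45 degrees, which may help explain why my flat, elevated design ended up working better.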


History of Catapults

All things in existence have a story, and every story has a beginning.  Though many attempt to forge the legendary start of a revolutionary story for their own entertainment, the truth that lies within the history of catapults is quite entertaining without any revision.
Catapults were originally created as weapons of war, most famously in the sieges of the Middle Ages.  As the enemy's defensive castle walls grew taller and stronger, a tool was needed to shoot hostile projectiles over or through the barricades.  The device invented for this purpose was named the catapult, a term derived from the Greek word "katapultos."  A catapult was a large machine on wheels with a basket attached to a long arm and a source of power for hurling objects.  The very first catapult was invented around 400 BC in Greece, and it was quite different from the catapults that we see and use today.  In fact, it was more similar to a crossbow, both in the way it looked and in the way it functioned.  It was called the Gastraphete.


The Greeks were so pleased by the amount of damage that the Gastraphete caused that they assembled a bigger version of it.  This was called a Ballista.


And finally, the trebuchet.  A trebuchet is a form of a catapult, but it's dependent on a different power source: gravity.


In conclusion, catapults have clearly advanced over time, using one of four power sources: tension, torsion, traction, and gravity.  Today, catapults vary in size, from tiny toy catapults for small children to enormous weaponized catapults.  If it weren't for the ancient Greeks, we wouldn't have a device with so many uses and purposes, whether it's hurling flaming projectiles in battle, being used for a science experiment, or simply being used exclusively for fun.




Wednesday, September 3, 2014

What I Want From This Class

This year, I am taking Physics as a part of my freshman curriculum, and as our first assignment, my class was told to start a blog, with the first post describing what we want to get out of this class.  Surely, what should be crossing my mind is something along the lines of the theory of relativity, or some highly scientific word that's too long to type without looking it up in a textbook first, but honestly, what I want to get out of this class is everything.  What is there that doesn't have to do with science?  Of course, this class will be more focused on the laws of physics, but simply wondering about these things that seem to have very little to do with any class material is like being given the key to unlock any door you wish.  If you have a goal, that is what you'll pursue, and every minute that you commit to the achievement of your goal is just slowly turning that key in the lock.  My goal is to unlock this door that stands before me and use what's behind it to unlock even more doors, and that is what I hope to see happen this year in physics class.  I want to be able to apply class material to everyday life, since science holds the true realities of life.  I want this class to open my mind to new ideas, and possibly better my well-being with these ideas so I can continue to unlock doors throughout the duration of my life.