Whenever you submit a form to download a file, the response sent back is a file. In that case, a setter called after the method that performs the download has no effect.

 if(form.getDownloadForm() != null && form.getDownloadForm().equals("Y"))
 {
   downloadForm(form, request);

   //The below setter has no effect
   form.setDownloadForm("Y");
 }
 class RegisterForm
 {
    private String downloadForm;

    public void setDownloadForm(String p_download)
    {
      this.downloadForm = p_download;
    }

    public String getDownloadForm()
    {
      return downloadForm;
    }
 }

The setter has no effect because the request handled by downloadForm(form, request) ends with a file-download response, not a response that reloads the page with the new form values set by the setter.

There are two simple fixes for this.

Fix 1
One simple fix is to reset the hidden form property to N in JavaScript after the form is submitted. Note that although the form is submitted, the page is not reloaded with a new response, since the response sent back is a file.

The page is reloaded only when a response is sent back to the same URL in the browser, not when the response comes back to the browser as a file.

 function downloadWFSetup(pWaterfallId) 
 {	
    $('#downloadForm').val('Y');
    document.WaterfallTriggerForm.submit();

    //This part of code runs even after the form submission since the 
    //response sent is file which does not require page reload
    $('#downloadForm').val('N');			
 }

In our case the page is not going to be reloaded, so the JavaScript continues executing even after the form submission and resets the downloadForm property to N.

Fix 2
The other way is to send the request as a link with downloadForm=Y. In this case there is no need to reset the form values, as we read the value using the request.getParameter() method.

Thought 1

"They say, 'Even a coin that no longer passes still has copper in it.' This is the one thing I have learned in my life: respect the next person, and love him as one of your own. Here, no one is 'big' and no one is 'small'. Every big person has a 'yesterday', and in the same way every small person has a 'tomorrow'. A boy brings me tea; that is his job. If he comes to my house, I am the one who will bring him tea and serve it. And I must.

As far as I am concerned, it is no great thing for human beings to become saints; if we can manage simply to remain human beings… that itself is happiness!"

– Deva

Thought 2

The cost of living is very low…
It is living like the next man that really costs…!!

A table function returns a result set that mimics what we would normally expect from a traditional SQL SELECT statement.

Table functions, introduced in Oracle9i, allow you to define a set of PL/SQL statements that, when queried, behave just as a regular query against a table would. The added benefit of a table function is that you can perform transformations on the data before it is returned in the result set.

CREATE OBJECT
First we create our own object type called PERSON_DETAILS. Then we create a table type of PERSON_DETAILS called PERSON_DETAILS_TABLE.

CREATE TYPE PERSON_DETAILS AS OBJECT
       (USER_NAME     VARCHAR2(50),
        ADDRESS       VARCHAR2(50),
        LOCATION      VARCHAR2(50));
/
CREATE TYPE PERSON_DETAILS_TABLE AS TABLE OF PERSON_DETAILS;
/

PIPELINED Clause

Within the CREATE FUNCTION statement there is an option called PIPELINED. This option tells Oracle to return the results of the function as they are produced, instead of waiting for the complete result set to be built. Pipelining the result set one row at a time has the immediate advantage of not requiring excessive memory or disk staging resources.
PIPE ROW(out_rec)

The PIPE ROW statement is the mechanism that sends a piped row, through the PIPELINED option, back to the caller of the function.

Working with a simple pipelined function requires two things:

  • collection type
  • pipelined function
CREATE OR REPLACE TYPE number_ntt AS TABLE OF NUMBER;
/
CREATE FUNCTION row_generator(rows_in IN PLS_INTEGER) RETURN number_ntt
  PIPELINED IS
BEGIN
  FOR i IN 1 .. rows_in LOOP
    PIPE ROW(i);
  END LOOP;
  RETURN;
END;
/
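A quick way to try the function is to query it through the TABLE() operator; COLUMN_VALUE is the pseudo-column Oracle gives to the scalar elements of the collection. With a sample input of 5 it returns the numbers 1 to 5 as rows:

 SELECT COLUMN_VALUE
   FROM TABLE(row_generator(5));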

CSV_TABLE is a collection type whose rows hold the individual values of a comma-separated string.

CREATE OR REPLACE TYPE "CSV_TABLE" as table of varchar2(32767);
/
FUNCTION CSV_TO_TABLE(p_delimited_string VARCHAR2,
                      p_delimiter        VARCHAR2 := ',')
  RETURN CSV_TABLE
  PIPELINED IS
  indexCount PLS_INTEGER;
  csvString  VARCHAR2(32767) := p_delimited_string;
BEGIN
  LOOP
    indexCount := instr(csvString, p_delimiter);

    IF indexCount > 0 THEN
      -- Pipe out the value before the delimiter and move past the delimiter
      PIPE ROW(substr(csvString, 1, indexCount - 1));
      csvString := substr(csvString, indexCount + length(p_delimiter));
    ELSE
      -- No more delimiters: pipe out the remaining value and stop
      PIPE ROW(csvString);
      EXIT;
    END IF;

  END LOOP;
  RETURN;
END CSV_TO_TABLE;

Input

A,B,C,D

Output

A
B
C
D

The output above is the collection returned by the function. Querying the function through the TABLE() operator turns the piped rows into an ordinary result set:

 SELECT COLUMN_VALUE FROM TABLE(PACKAGE_NAME.CSV_TO_TABLE('A,B,C,D'));

When a column value is NULL, it can be replaced by '~' in the output using NVL:

  SELECT NVL(FAX_NO, '~') FROM TABLE_NAME;
  SELECT NVL(NULL, '~') FROM DUAL;

The query below can be used in scenarios where three drop-downs are used: Location is mandatory and loaded first, and Area and Pincode are populated based on it, so the user may or may not have selected them when the search runs. NVL(p_area, Area) and NVL(p_pin_code, PinCode) compare the column with itself when the corresponding parameter is NULL, so those filters are effectively ignored.

  SELECT Name, Age, PhoneNo
    FROM Person 
   WHERE Location =  p_location and 
         Area =  NVL(p_area, Area) and 
         PinCode =  NVL(p_pin_code, PinCode);
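For example, with only the mandatory Location chosen (sample values against the same Person table), the optional filters fall away:

  SELECT Name, Age, PhoneNo
    FROM Person
   WHERE Location = 'North Chennai' and
         Area     = NVL(NULL, Area) and       -- true for every non-NULL Area
         PinCode  = NVL(NULL, PinCode);       -- true for every non-NULL PinCode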

Sometimes the WHERE clause has to handle more than one input value, e.g.

 SELECT * 
  FROM Person 
 WHERE Location IN('North Chennai', 'South Chennai');

When the same query is used in a search screen, the input may have three possible values:

  • NULL
  • Single Value
  • Multiple Value

The IN query above works for a single value and for multiple values, but not for NULL. The earlier NVL-based query works for NULL and a single value, but not for multiple values.

Below is a simple query that works when the input PERSON_ID is NULL or a single value; handling a CSV of multiple values needs a small change, shown after it.

 -- the literal 'P101' stands for the input value
 SELECT PERSON_ID
   FROM PERSONS P
  WHERE ((CASE
           WHEN 'P101' IS NULL THEN
            NULL
           ELSE
            'P101'
         END) IS NULL OR P.PERSON_ID IN ('P101'));

For multiple values we need a slight modification: convert the CSV string into a table with CSV_TO_TABLE and use that as the input.

 SELECT DISTINCT PERSON_ID
   FROM tblPerson 
  WHERE nvl(PERSON_ID, '~') IN (SELECT column_value
                                  FROM TABLE(PACKAGE_NAME.CSV_TO_TABLE(P_CSV_PERSON_ID))
                                UNION ALL
                                SELECT '~'
                                  FROM dual);

The subquery in the WHERE clause takes the value(s) of P_CSV_PERSON_ID (single or multiple), and falls back to '~' when it is NULL.

The other workaround is shown below:

SELECT PERSON_ID
    FROM tblPerson 
   WHERE (((CASE
           WHEN P_CSV_PERSON_ID IS NULL THEN
            NULL
           ELSE
            P_CSV_PERSON_ID
         END)) IS NULL
      OR PERSON_ID IN
         (SELECT *
            FROM TABLE(PACKAGE_NAME.CSV_TO_TABLE(P_CSV_PERSON_ID))));

where

 P_CSV_PERSON_ID= 'P101,P102'

For more details on CSV_TO_TABLE, see the function defined earlier in this post.

This error is thrown when the data you are trying to insert already exists for a primary key (unique) column:

 unique constraint (DATABASENAME.PK_COLUMN_NAME) violated
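For example, with a hypothetical PERSONS table whose primary key is PERSON_ID, inserting the same key value twice raises this error:

 INSERT INTO PERSONS (PERSON_ID, USER_NAME) VALUES ('P101', 'USER_A');
 INSERT INTO PERSONS (PERSON_ID, USER_NAME) VALUES ('P101', 'USER_B');
 -- the second insert fails: ORA-00001: unique constraint violated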

When you try to insert a value that is not allowed for a column, this error is thrown. It is similar to an ENUM in Java.

In the example below the CATEGORY column may hold MALE or FEMALE, but when you try to insert BOTH, the error is thrown since it is not an allowed value.

CATEGORY IN ('MALE','FEMALE')
check constraint (DATABASENAME.CHK_CATEGORY) violated
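A sketch of the scenario above, assuming a hypothetical PERSONS table with that check constraint on CATEGORY:

 INSERT INTO PERSONS (PERSON_ID, CATEGORY) VALUES ('P102', 'BOTH');
 -- fails: ORA-02290: check constraint (DATABASENAME.CHK_CATEGORY) violated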

If the parent table referenced by a foreign key does not have the value, this error is thrown:

integrity constraint (DATABASENAME.FK_CATEGORY) violated - parent key not found
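For example (hypothetical tables), inserting a child row whose CATEGORY_ID has no matching row in the parent CATEGORY table raises this error:

 INSERT INTO PERSONS (PERSON_ID, CATEGORY_ID) VALUES ('P103', 'UNKNOWN_CATEGORY');
 -- fails: ORA-02291: integrity constraint violated - parent key not found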

Whenever a table is created in Oracle, it should be done as a five-step process:

  1. Table Creation
  2. Adding Constraints(Primary Key, Foreign Key)
  3. Adding Synonyms
  4. Giving Grants
  5. Alter Queries

Table Creation

create table TABLE_NAME_1
( COLUMN_NAME_1   varchar2(50),
  COLUMN_NAME_2   NUMBER(1),
  COLUMN_NAME_3   DATE not null,
  COLUMN_NAME_4   NUMBER(38)
 );

Constraint
Query 1

alter table TABLE_NAME_2 
  add constraint FK_COLUMN_NAME_1 foreign key (COLUMN_NAME_1) references TABLE_NAME_1 (COLUMN_NAME_1);

Query 2

alter table TABLE_NAME_3 add constraint fk_column_1 
foreign key (COLUMN_1, COLUMN_2) references TABLE_NAME_2 (COLUMN_1, COLUMN_2);

Synonym

 CREATE PUBLIC SYNONYM TABLE_NAME_1 FOR OWNER_NAME.TABLE_NAME_1;
 CREATE PUBLIC SYNONYM SEQUENCE_NAME_SEQ FOR OWNER_NAME.SEQUENCE_NAME_SEQ;
 GRANT ALL ON SEQUENCE_NAME_SEQ TO ADMIN,PART,ALL_DEVELOPERS;

Grants

 GRANT ALL ON TABLE_NAME_1 TO ADMIN,PART,ALL_DEVELOPERS;
 GRANT INSERT,UPDATE,DELETE,SELECT ON TABLE_NAME_1 TO ADMIN,PART,ALL_DEVELOPERS;

Other Queries

 ALTER TABLE TABLE_NAME_1 MODIFY COLUMN_NAME_1 NOT NULL;
 ALTER TABLE TABLE_NAME_1 MODIFY COLUMN_NAME_1 varchar2(50);
 ALTER TABLE TABLE_NAME_1 DROP COLUMN COLUMN_NAME_1;
 ALTER TABLE TABLE_NAME_1 DROP CONSTRAINT CONSTRAINT_NAME_1;

Things to Consider

While creating a primary key as a combination of two or more columns (unlike a unique key, NULL values are not allowed), you should include only non-nullable columns as part of the key.

 ALTER TABLE TABLE_NAME
   ADD CONSTRAINT PK_TABLE_NAME PRIMARY KEY (COLUMN_NAME_1, COLUMN_NAME_2, COLUMN_NAME_3)
  USING INDEX TABLESPACE INDX;

COLUMN_NAME_1, COLUMN_NAME_2 and COLUMN_NAME_3 should all be non-nullable columns.

When dropping a table in order to run a new script, the following should be done:

DROP TABLE TABLE_NAME;
DROP SEQUENCE SEQUENCE_NAME;
DROP PUBLIC SYNONYM SYNONYM_NAME;

In case you are dropping multiple tables, do the same thing with the statements grouped by type, as below:

DROP TABLE TABLE_NAME;
.
.
.

DROP SEQUENCE SEQUENCE_NAME;
.
.
.

DROP PUBLIC SYNONYM SYNONYM_NAME;
.
.
.
.

The child table referencing the parent table should be dropped first.

 DROP TABLE CHILD_TABLE_NAME;
 DROP TABLE PARENT_TABLE_NAME;

Checking whether a table's primary (or unique) key is referenced somewhere by a child table through a foreign key:

SELECT TABLE_NAME AS "CHILD_TABLE"
       ,CONSTRAINT_NAME
  FROM ALL_CONSTRAINTS T
 WHERE R_OWNER = 'OWNER_NAME'
   AND CONSTRAINT_TYPE = 'R'
   AND R_CONSTRAINT_NAME IN (SELECT CONSTRAINT_NAME
                               FROM ALL_CONSTRAINTS
                              WHERE CONSTRAINT_TYPE IN ('P', 'U')
                                AND TABLE_NAME = 'TABLE_NAME'
                                AND OWNER = 'OWNER_NAME')
 ORDER BY TABLE_NAME
         ,CONSTRAINT_NAME;

Sequence

CREATE SEQUENCE TEST_SEQ
MINVALUE 1
MAXVALUE 999999999999999999999999999
START WITH 1
INCREMENT BY 1
CACHE 20;
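A sketch of typical usage: NEXTVAL fetches the next number from the sequence, usually while inserting a row (TABLE_NAME_1 here is the table created earlier):

 SELECT TEST_SEQ.NEXTVAL FROM DUAL;

 INSERT INTO TABLE_NAME_1 (COLUMN_NAME_3, COLUMN_NAME_4)
 VALUES (SYSDATE, TEST_SEQ.NEXTVAL);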

Audit Trigger

CREATE OR REPLACE TRIGGER TRIGGER_NAME
BEFORE INSERT OR UPDATE ON TABLE_NAME
REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
BEGIN
  IF INSERTING THEN
    -- USERUID is assumed to be an application-specific value/function giving the current user
    :NEW.CRE_USER_UID  := USERUID;
    :NEW.CRE_TIMESTAMP := SYSDATE;
  ELSIF UPDATING THEN
    :NEW.UPD_USER_UID  := USERUID;
    :NEW.UPD_TIMESTAMP := SYSDATE;
  END IF;
END TRIGGER_NAME;
/

Note the / at the end of the trigger; it is needed to execute the CREATE TRIGGER statement.
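The audit columns the trigger populates would typically exist on the table itself; a sketch, with the column names taken from the trigger and the data types assumed:

 ALTER TABLE TABLE_NAME ADD (CRE_USER_UID  VARCHAR2(50),
                             CRE_TIMESTAMP DATE,
                             UPD_USER_UID  VARCHAR2(50),
                             UPD_TIMESTAMP DATE);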

Upstream and Downstream

In terms of “flow of data”, your repo is at the bottom (“downstream”) of a flow coming from upstream repos (the ones you pull from) and going back to the same or other upstream repos (the ones you push to).

In terms of source control, you’re downstream when you copy (clone, checkout, etc) from a repository. Information flowed “downstream” to you.

When you make changes, you usually want to send them back “upstream” so they make it into that repository so that everyone pulling from the same source is working with all the same changes.

You cannot always create a branch, or pull an existing branch and push back to it, because you may not be registered as a collaborator for that specific project.

Forking

Forking is nothing more than a clone on the GitHub server side:

  • without the possibility to push back directly
  • with the fork queue feature added to manage merge requests

You keep a fork in sync with the original project by:

  1. adding the original project as a remote
  2. fetching regularly from that original project
  3. rebasing your current development on top of the branch of interest that you updated from that fetch.

Only a contributor of the original project can approve the changes you pushed to your fork for merging into the original code.

Clone

When you are cloning a GitHub repo on your local workstation, you cannot contribute back to the upstream repo unless you are explicitly declared as “contributor”.
So that clone (to your local workstation) isn’t a “fork”. It is just a clone.

Git

  • upstream generally refers to the original repo that you have forked
  • origin is your fork: your own repo on GitHub, a clone of the original repo on GitHub

The general pattern is as follows:

  1. Fork the original project’s repository to have your own GitHub copy, to which you’ll then be allowed to push changes.
  2. Clone your GitHub repository onto your local machine
  3. Optionally, add the original repository as an additional remote repository on your local repository. You’ll then be able to fetch changes published in that repository directly.
  4. Make your modifications and your own commits locally.
  5. Push your changes to your GitHub repository (as you generally won’t have the write permissions on the project’s repository directly).
  6. Contact a contributor once you have committed your changes in your fork, so they can be pulled into the original repository.